Inside Advanced Scale Challenges | Sunday, September 25, 2016

‘Composable Infrastructure’ and the Hole in My Back Yard 

The restoration of my 1910 Craftsman bungalow means the house is effectively being torn down and rebuilt from the inside out. This includes a scale model in my back yard of the Mariana Trench, a.k.a. our new foundation. It’s easy to extend this metaphor to the topic of infrastructure.

The Woes of an Upgrade

When my house was built, two bedrooms and a bathroom were the norm, and people wanted to separate the kitchen from the living area. Fast-forward 100 years: my family demands not just more space but a more open concept. My house was designed for a particular workload, but this new reality is causing major architectural challenges. The result is a painful and expensive project that will take months to complete. Sound like any IT projects you’ve seen lately?

Likewise, the demands of different workloads in the data center are evolving to a point where bi-modal IT is becoming the norm: One environment is designed to manage traditional IT “run-the-business” applications like ERP. A second supports the new generation of “third platform” apps (cloud-native, mobile, social, and internet-of-things) that create new user experiences and revenue opportunities for the business. Both modes need persistent storage, but the way applications hook into that storage is dramatically different.

Composable infrastructure, also known as “infrastructure-as-code,” is a new architectural approach that enables greater flexibility to meet the ever-changing demands of both traditional and emerging workloads. The concept is simple: physical, virtual, and containerized apps can be mapped instantly and dynamically to the right resources — including the right class of storage — to meet any service level, capacity, or cost point requirements. This is the logical next step that goes beyond many hyper-converged discussions, and while the concept is simple, engineering for simplicity across compute, networking, and storage for multi-tenant workloads takes a lot of innovation.

Brad Parks of HPE

In your data center, different workloads have different storage needs. Some workloads, like Hadoop or object storage, were built for direct attached storage (DAS). Other workloads, like VDI, benefit from low-cost, software-defined storage that delivers predictable scaling across shared capacity. Mission-critical workloads, like Oracle databases or ERP deployments, might require extremely low-latency, highly resilient, Tier-1 flash storage to meet business objectives.

The average enterprise data center runs the gamut from DAS to SDS to external flash, and traditionally this has required separate, siloed approaches to deployment in which each application stack has its own set of resources and management tools. For many customers, their information lives in as many as six or more separate storage architectures on their data center floor. A truly composable infrastructure gives you the ability to programmatically deploy the right set of compute, fabric, and storage resources for each workload as part of a single, consolidated management approach.
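To make “programmatically deploy the right set of compute, fabric, and storage resources” concrete, here is a minimal Python sketch. Everything in it — the workload profiles, the resource class names, and the `compose_infrastructure` function — is hypothetical, illustrating the idea of mapping each workload to its appropriate storage class through a single call rather than showing any vendor’s actual API:

```python
# Illustrative only: these workload profiles and storage classes are
# hypothetical examples, not any vendor's actual catalog or API.
STORAGE_PROFILES = {
    "hadoop": {"storage": "das", "reason": "built for direct-attached disks"},
    "vdi":    {"storage": "software-defined", "reason": "low-cost, predictable scaling"},
    "oracle": {"storage": "tier1-flash", "reason": "low latency, high resilience"},
}

def compose_infrastructure(workload: str, compute_nodes: int) -> dict:
    """Map a workload to the right class of storage plus compute and fabric."""
    profile = STORAGE_PROFILES.get(workload)
    if profile is None:
        raise ValueError(f"no profile defined for workload {workload!r}")
    return {
        "workload": workload,
        "compute_nodes": compute_nodes,
        "fabric": "converged-ethernet",  # one shared fabric across tenants
        "storage_class": profile["storage"],
    }

# One consolidated entry point instead of six siloed toolsets:
spec = compose_infrastructure("oracle", compute_nodes=4)
print(spec["storage_class"])  # tier1-flash
```

The point of the sketch is the shape of the interface: one catalog, one function, and the siloed per-architecture tooling disappears behind it.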

Creating the utility datacenter

Think about plugging in a lamp. The power from the outlet was generated by some combination of wind, solar, hydroelectric, and other sources. You don’t really think about this; you just want the lamp to work when you flip the switch.

The same is true for your application owners, and even more so for application developers. The unified API of a composable infrastructure is the equivalent of your wall outlet. You use the API to ask for the resources you need for your workload. Behind that API is a software-defined intelligence layer that looks at your request and determines how to fulfill it. Through the use of intelligent templates, storage administrators can define attributes like service levels, drive types, data compaction technologies, and more. When you “plug in” an application and ask for a “gold,” “silver,” or “bronze” quality of service, the resource pool delivers the right type of storage to meet the needs of your workload—no matter what. When it’s no longer needed, you just turn off the light and resources go back into the pool.
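As a rough sketch of how that software-defined intelligence layer could resolve a request against administrator-defined templates, consider the following Python snippet. The template attributes (drive types, latency targets) and the tier names are assumptions for illustration, not a real product schema:

```python
# Hypothetical service-level templates a storage administrator might define.
TEMPLATES = {
    "gold":   {"drive_type": "nvme-flash", "dedupe": True,  "max_latency_ms": 1},
    "silver": {"drive_type": "ssd",        "dedupe": True,  "max_latency_ms": 5},
    "bronze": {"drive_type": "hdd",        "dedupe": False, "max_latency_ms": 20},
}

def request_storage(service_level: str, capacity_gb: int) -> dict:
    """'Plug in' a workload: return a volume spec satisfying the template."""
    template = TEMPLATES[service_level]
    return {"capacity_gb": capacity_gb, **template}

def release_storage(volume: dict, pool_free_gb: int) -> int:
    """Turn off the light: the capacity goes back into the shared pool."""
    return pool_free_gb + volume["capacity_gb"]

vol = request_storage("gold", capacity_gb=500)
```

The caller names only a service level and a size; the template decides the drive type and data services, which is exactly the wall-outlet abstraction the paragraph describes.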

In the end, an application owner doesn’t really need to know which storage pool he’s accessing; he just needs storage that meets the required service level when he flips the switch.

Picking the Right General Contractor

In the ‘composable’ world, many vendors present a biased approach to storage because they don’t have all the pieces. Some push you towards a certain class of storage because that’s what they sell, but it won’t really meet all your needs. Others might support more storage options, but require additional management layers, turning provisioning into a multi-step process with pre-provisioning that adds complexity. What you’re really looking for in a composable solution is the ability to automate everything in a single task — from requesting service to mapping the compute node to actually provisioning that capacity on a storage array — all through a single line of code and without any operator intervention.
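The “automate everything in a single task” idea can be sketched in a few lines of Python. The functions below are hypothetical stand-ins (not a real vendor API): the point is that array-side provisioning and compute-node mapping collapse into one call, with no pre-provisioning or operator steps in between:

```python
# Illustrative sketch of single-task provisioning; all functions are
# hypothetical stand-ins, not a real vendor API.

def create_volume(service_level: str, capacity_gb: int) -> dict:
    """Provision capacity on a storage array matching the service level."""
    return {"service_level": service_level, "capacity_gb": capacity_gb, "hosts": []}

def attach(volume: dict, host: str) -> None:
    """Map the new volume to a compute node."""
    volume["hosts"].append(host)

def provision(service_level: str, capacity_gb: int, host: str) -> dict:
    """The whole workflow as one task: request, provision, and map."""
    volume = create_volume(service_level, capacity_gb)
    attach(volume, host)
    return volume

# A single line of code from request to mapped capacity:
vol = provision("gold", capacity_gb=250, host="esx-node-01")
```

Contrast this with the multi-step alternative the paragraph criticizes, where an operator would pre-provision capacity in one tool and map it to hosts in another.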

There’s a difference between adding management layers to hide complexity and engineering for simplicity from the ground up. If your goal is to reduce complexity, you should be looking for a solution where simplicity is engineered into the DNA, not bolted on after the fact. The best way to accomplish this is a unified approach to storage and automation that is designed to work together from the start.

It Takes Vision to Expand the Limits of the Possible

For me and my house project, that meant having the vision to look beyond five layers of wallpaper, salmon colored curtains, and a 30-year-old carpet. For your data center, it means architecting for simplicity at data-center scale across both traditional and third-platform applications.

The key thing to remember about composability is that it’s not a marketing-driven construct; it’s an engineering-driven construct, born from DevOps culture in response to new application-developer needs, that also delivers great value to your business.

Brad Parks is the Director of Go-to-Market Strategy and Enablement for Storage at Hewlett Packard Enterprise.
