OpenStack Poised for Wider Enterprise Adoption

As a recent attendee and speaker at the OpenStack Summit in Atlanta, I was amazed at what a difference a year makes in the life of the OpenStack project. I come from a datacenter operations background, and my interest has always been in how to help today's enterprise customers move faster and run their operations more efficiently. Comparing this event to the OpenStack Summit last April in Portland, you can see some very dramatic changes in a short amount of time.

A year ago the hallway conversations were about functional capabilities in OpenStack ("Does it work yet?"), whereas at this Summit the hallway conversations were about the most effective operations ("We know it works. How can we optimize operations?"). The crowd in attendance was also very different. This year's Summit had many customers asking questions and kicking the tires, while last year's Summit felt very much like a developer homecoming.

I'm often asked what it will really take to see widespread adoption of OpenStack in the enterprise. Before I answer that, I think we need to take a step back and consider both the goals and the challenges OpenStack has faced to date. We also need to take into account how enterprise workloads are evolving. The applications required to run critical enterprise workloads must be matched to an infrastructure type that can support the ever-changing needs of the business. As these applications shift, so must the infrastructure.

What is the goal of OpenStack? While this simple question can have many possible answers, I see the goal of OpenStack as providing an open source abstraction layer that sits above the datacenter infrastructure and below the application workloads. In addition, this environment should be free from vendor lock-in. This abstraction layer allows the creation of dynamic pools of infrastructure that can be automated in a scalable, efficient way. Infrastructure resources are presented to a workload when needed and returned when no longer needed. As the enterprise consolidates and adopts multiple workloads or tenants on a single cloud management platform, OpenStack must also be flexible and efficient.
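To make that request-and-return model concrete, here is a minimal sketch using the openstacksdk Python library (a more consolidated client than the per-project clients of the Icehouse era). The cloud name, image, flavor, and network names below are placeholders, not a prescription for any particular deployment.

    import openstack

    # Credentials come from a clouds.yaml entry; "my-cloud" is a placeholder name.
    conn = openstack.connect(cloud="my-cloud")

    # Look up the building blocks for the workload (all names are hypothetical).
    image = conn.compute.find_image("ubuntu-server")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("tenant-net")

    # Present infrastructure to the workload when it is needed...
    server = conn.compute.create_server(
        name="batch-worker-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)

    # ... the workload runs here ...

    # ...and return the resources to the pool when it is not.
    conn.compute.delete_server(server)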

This is the long-term goal of any cloud management platform, but most will agree OpenStack isn't quite there today. There are a number of operational challenges that need to be overcome in the short term. First, enterprises traditionally consume products in a different model than OpenStack. Most enterprises purchase COTS (Commercial Off The Shelf) software and either implement it themselves or pay a consulting fee and ongoing support for the product over time. By contrast, OpenStack in its purest form is an open source collection of projects that must be installed and interconnected, and support is provided for each project by the community. There is no single "throat to choke" when things go wrong. Who do you call for support? While this model is embraced by developers familiar with open source, it often causes traditional enterprise operations folks heartburn because the product model and the open source model are so vastly different in both initial consumption and ongoing support. This is where OpenStack project vendors come into play. There are many companies with distributions that take the various OpenStack projects and combine them into a functioning, tested, and supported product version of OpenStack.

The next challenge for OpenStack in the enterprise is maturity. With the current Icehouse release, we do appear to have checked a lot of the functional boxes and we are now moving into improving IT operations from version to version. Only time and successful implementations can solve the perception of immaturity in the market. OpenStack needs more reference customers in the enterprise willing to talk openly about what is great (and sometimes not so great) so it can continue to mature as a product and gain a reputation as a project CIOs are willing to bring into their organization with minimal perceived risk. At a minimum, CIOs will need to be able to answer the question, "How do I operate my OpenStack environment for the next three to five years?"

The last challenge I see in the industry currently is talent. Traditional enterprise operations teams are not equipped with the skillsets often required for OpenStack. Many great programs exist to educate users on OpenStack, but we need deeper, operations-level training. Most enterprises do not have the time or resources to staff a dedicated team of specialized OpenStack engineers, developers, and support staff.

Now that we have covered some of the challenges facing OpenStack in the enterprise, what are some possible solutions? From an operations perspective, I like to analyze any potential solution in two different phases. First, what will it take to install and configure this solution? To put it another way, what would it take to go from bare metal hardware to a working OpenStack system? Second, what will it take to operate this solution over the lifetime of the application? Let's break each of these questions down into potential solutions.

After the OpenStack Summit in Atlanta, I believe we have turned the corner on making OpenStack easier to install and configure. It was very evident that a number of companies are offering either custom distributions of OpenStack or reference architectures that allow you to tailor your infrastructure to your needs. I do believe this process can be improved through the creation of reference architectures that go beyond just OpenStack and take into account both the underlying infrastructure OpenStack is installed upon and the applications, workloads, and tenant requirements above OpenStack. This is where I see potential growth in the future, as well as increased adoption of OpenStack in the enterprise. This approach will allow a typical enterprise to get up to speed quickly and independently, without the need for costly outside consultation.

Next we have the issue of operating OpenStack over time. As many sessions at the OpenStack Summit noted, the focus has shifted from implementing required features to optimizing existing features and making process workflows more efficient. Complexity never scales well. Only as a robust infrastructure platform that can be driven easily through automation will OpenStack succeed long term in the enterprise. The dynamic pools of resources offered by OpenStack need to be delivered in an automated fashion and also need to be easily upgraded from version to version. OpenStack has made great strides in simplifying upgrades while still maintaining backward compatibility with previous versions, but more needs to be done here.
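As one illustration of the kind of day-2 automation this implies, here is a small sketch, again using the openstacksdk Python library, that sweeps a project for instances in undesirable states. The status checks and actions shown are simplified assumptions; a real operational policy would be considerably more involved.

    import openstack

    conn = openstack.connect(cloud="my-cloud")  # placeholder cloud name

    # Walk every instance visible to this project and apply a simple policy.
    for server in conn.compute.servers():
        if server.status == "ERROR":
            # Remove failed instances so the scheduler can place fresh ones.
            print(f"Deleting failed instance {server.name} ({server.id})")
            conn.compute.delete_server(server)
        elif server.status == "SHUTOFF":
            # Shelve stopped instances so their compute resources return to the pool.
            print(f"Shelving idle instance {server.name}")
            conn.compute.shelve_server(server)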

As we move to a more turnkey OpenStack distribution or reference architecture approach, we address the challenges named above: we remove some of the complexity of OpenStack, we gain confidence in the solution as a pre-validated, tested, and supported foundation, and we no longer require an OpenStack specialist just to implement the environment.

I believe a holistic approach to a working OpenStack reference architecture gets us much closer to the idea of a scalable, dynamic, agile OpenStack infrastructure that will serve as the bridge to the next generation of datacenter architecture and solutions. We just need to continue creating a scalable, agile, dynamic, and predictable foundation that serves the needs of the business while at the same time making the complex simpler to operate.

Aaron Delp is a cloud solutions architect specializing in the creation of OpenStack-, CloudStack-, and VMware-based cloud solutions. Prior to joining SolidFire, Delp was senior director of technical marketing for the Cloud Platforms Group at Citrix Systems, where he led the generation and publishing of reference architectures, technical documentation, competitive intelligence, and field enablement content. In addition, Aaron led the cloud field enablement team for VCE, specializing in management, orchestration, and automation products from VMware, Cisco Systems, EMC, and CA Technologies. He spent over ten years at IBM in various positions, helping business partners, vendors, and distributors design and deploy data center solutions.
