
Optimizing Container Spend: 5 Recommendations 


As container technology moves into the mainstream, users are turning their attention to the next step: container optimization. Typically, conversations with customers focus on production environments, but recently they have shifted toward how to optimize container spending.

Kubernetes – which seems to be the most popular container orchestration platform among our customer base – offers several ways to optimize costs and maximize performance. Below, I have identified five specific opportunities ripe for container optimization. Take a look at these within your own environments.

1) Turn Off Idle Pods 

Many standard instances/VMs and databases in non-production environments sit idle outside of working hours and can be turned off or “parked.” The same holds for pods: in non-production environments, they can and should be scheduled in the same way.
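To make this concrete, here is a minimal sketch of how that scheduling might be automated with the official Kubernetes Python client. It assumes non-production namespaces carry a hypothetical env=nonprod label and that the script runs on a schedule (cron or a CronJob) at the start and end of the working day; the label and the restore step are illustrative, not a prescription.

```python
# Sketch: "park" non-production workloads by scaling their Deployments to zero
# replicas after hours. Assumes the official `kubernetes` Python client and a
# hypothetical env=nonprod namespace label; run it from cron or a CronJob.
from kubernetes import client, config

def park_nonprod(replicas: int = 0) -> None:
    config.load_kube_config()  # or config.load_incluster_config() in-cluster
    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    # Find namespaces labeled as non-production (the label name is an assumption).
    for ns in core.list_namespace(label_selector="env=nonprod").items:
        for dep in apps.list_namespaced_deployment(ns.metadata.name).items:
            apps.patch_namespaced_deployment_scale(
                name=dep.metadata.name,
                namespace=ns.metadata.name,
                body={"spec": {"replicas": replicas}},
            )

if __name__ == "__main__":
    park_nonprod(0)    # evening run: turn idle pods off
    # park_nonprod(1)  # morning run: bring them back (or restore saved counts)
```

In practice you would record each deployment's original replica count (for example, in an annotation) before parking so the morning run can restore it exactly.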

2) Rightsize Your Pods 

Kubernetes pods are the smallest deployable computing units in the Kubernetes container environment. It is common practice to use a standard template for limits and requests when provisioning pods. Requests describe the minimum CPU and memory a pod needs in order to be scheduled on a node, while limits describe the maximum CPU and memory the pod can consume on that node. Typically, engineers set the initial values by rule of thumb, such as doubling an estimate just to be on the safe side, and plan to adjust them later once they have some data to look at. As with many things in life, “later” rarely happens. As a result, the footprint of the cluster inflates over time, exceeding the actual demand of the services running inside it.
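For illustration, here is what an explicit requests/limits specification looks like when a pod is created through the Kubernetes Python client. The pod name, image and numbers are placeholders; the intent is that requests reflect measured demand rather than a doubled guess.

```python
# Sketch: a single-container pod with explicit requests (used for scheduling)
# and limits (a ceiling on consumption). All names and values are placeholders.
from kubernetes import client, config

config.load_kube_config()

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "example-api"},          # hypothetical name
    "spec": {
        "containers": [
            {
                "name": "api",
                "image": "example.registry/api:1.0",  # hypothetical image
                "resources": {
                    "requests": {"cpu": "250m", "memory": "256Mi"},  # measured demand
                    "limits": {"cpu": "500m", "memory": "512Mi"},    # hard ceiling
                },
            }
        ]
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod_manifest)
```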

Think about it: if every pod is over-provisioned by 50 percent and the cluster is always 80 percent full, then 40 percent of the cluster capacity is allocated but never used, or, simply put, wasted.
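A quick back-of-the-envelope check of that figure, using the same percentages:

```python
# Rough waste estimate from the numbers above.
overprovision = 0.50  # half of each pod's request goes unused
cluster_fill = 0.80   # pod requests occupy 80 percent of cluster capacity
wasted = overprovision * cluster_fill
print(f"Wasted cluster capacity: {wasted:.0%}")  # -> 40%
```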

3) Consider Storage Opportunities 

Out of the box, containers lose their data when they restart. This is fine for stateless components but becomes an issue when a persistent data store is required. One place to look for additional container optimization opportunities is the overprovisioning of persistent storage (EBS, Azure Storage Disks, etc.) attached to your containers. There are several options for optimizing container storage, particularly virtualized storage that can be shared by multiple containers and that persists over time rather than being destroyed along with individual containers. A few different persistent-storage plugins and plugin-driven storage solutions are available from third-party vendors.
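As a sketch, this is roughly what claiming right-sized persistent storage looks like through the Kubernetes Python client. The claim name, storage class and size are assumptions for illustration; the point is to size the request against measured data rather than a generous default.

```python
# Sketch: a PersistentVolumeClaim that outlives individual containers.
# Name, storage class and size are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()

pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-db-data"},        # hypothetical name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "gp3",                  # assumption: an EBS-backed class
        "resources": {"requests": {"storage": "20Gi"}},  # sized to observed data
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc_manifest
)
```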

4) Rightsize Your Nodes 

Too many worker nodes are the wrong size and type. Kubernetes permits co-locating applications on the same nodes, which can dramatically reduce the cloud bill. Yet incorrectly sized instances and volumes inflate the cost of Kubernetes clusters. Rightsizing could save up to 50 percent (particularly if no previous rightsizing has been done).

Another consideration: smaller nodes have a higher relative OS footprint and increased management overhead, and the smaller the node, the more resources end up stranded. Stranded resources are CPU or memory that sit idle yet cannot be allocated to any pod, because the pods waiting to be scheduled are too big to claim them. The closer pod sizes are to the size of the node (server), the higher the percentage of stranded resources.
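One rough way to spot this is to compare each node's allocatable capacity against the pod requests scheduled onto it. The sketch below does that for CPU with the official Kubernetes Python client; it is an approximation of stranded capacity (unclaimed headroom), not an exact measure, and it assumes list access to nodes and pods.

```python
# Sketch: per-node CPU not claimed by pod requests -- a rough proxy for
# stranded capacity. Memory can be checked the same way.
from kubernetes import client, config
from kubernetes.utils import parse_quantity  # converts "250m", "3920Mi" to Decimal

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    name = node.metadata.name
    allocatable = parse_quantity(node.status.allocatable["cpu"])

    pods = v1.list_pod_for_all_namespaces(
        field_selector=f"spec.nodeName={name},status.phase=Running"
    ).items
    requested = sum(
        parse_quantity(c.resources.requests.get("cpu", "0"))
        for pod in pods
        for c in pod.spec.containers
        if c.resources and c.resources.requests
    )

    print(f"{name}: {allocatable - requested} of {allocatable} CPU cores unclaimed by requests")
```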

5) Review Purchasing Options 

All of the preceding options relate to the actual configuration of your container infrastructure. Just as important is making sure your purchasing options align with your needs. Choosing the correct instance/VM purchase type for your containerized infrastructure is critical to maintaining flexibility and maximizing ROI. Carefully analyze your purchasing options (e.g., on-demand, reservations and spot) to select the right one for your workload, both in terms of size and usage schedule. Note that reserved instances are not always the best option for resources that can be scheduled to be turned off. Leverage cost-optimization tools to support the earlier recommendations on instance scheduling and rightsizing; such tools can often change the equation and help you avoid lock-in and up-front commitments.
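A simple back-of-the-envelope comparison shows why parkable resources can undercut a reservation. All rates and hours below are hypothetical placeholders, not real pricing:

```python
# Illustrative comparison: a reservation bills for every hour of the month,
# while an on-demand instance parked outside working hours bills only for the
# hours it actually runs. All numbers are assumptions for illustration.
HOURS_PER_MONTH = 730
on_demand_rate = 0.10          # $/hour, hypothetical
reserved_rate = 0.065          # $/hour effective, hypothetical ~35% discount
running_hours = 10 * 5 * 4.35  # ~10 hours/day, 5 days/week

on_demand_cost = on_demand_rate * running_hours
reserved_cost = reserved_rate * HOURS_PER_MONTH  # committed whether used or not

print(f"On-demand, parked nights/weekends: ${on_demand_cost:.2f}/month")
print(f"Reserved, billed around the clock: ${reserved_cost:.2f}/month")
```

With these assumed numbers, the parked on-demand instance costs roughly half as much as the reservation, which is exactly the case where scheduling beats committing.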

Container Optimization is Just Another Kind of Resource Optimization 

The opportunities to save money through container optimization are really no different from those for your non-containerized resources. Native tools, whether from the cloud provider or open source, can help, but their capabilities are limited. For a fully optimized environment, you’ll want to take advantage of the growing ecosystem of purpose-built cost-optimization tools.

Jay Chapel is CEO of ParkMyCloud  

 
