With modern organizations moving their businesses into the cloud, container technology has become a key enabler to realizing the cloud’s promised benefits. As infrastructure and applications are rearchitected into distributed, containerized systems, teams are also adapting to new practices and meeting new challenges, including how to manage cost in a container environment.
Supporting containerized workloads requires dynamic resources that can scale on the fly to match application requirements. While this gives teams the agility they need, it also makes costs difficult to predict and manage. In this blog post, we’ll answer some key questions about cloud cost optimization for containerized workloads that Spot has helped customers answer.
How do I quickly and efficiently manage my containerized workloads?
One of the many benefits of containers is their ability to scale autonomously to meet demand. While this brings speed and agility, it can also bring chaos: new containers can be launched by multiple entities across an organization, making it difficult to keep track of workloads, who launched them, and for which project.
Modern cloud management tools have powerful features that can help ops teams and budget managers gain visibility into container behavior, ensure resources are working optimally across business units, and maintain cost efficiencies. Key features to look out for include:
- Automation for tasks like spinning up new clusters, configuring auto scaling, and fetching logs can reduce the time, effort and costs required to manage workloads.
- Tagging increases the productivity of users by presenting relevant information quickly. Engineers can easily identify and control the resources they spin up, and can access and manipulate infrastructure based on relevant tags.
- Cost showback is especially important for FinOps teams who need application-level visibility into container clusters to quickly perform detailed cost analysis across workloads, understand how much each project or team is spending, and make the right allocation decisions.
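To make the showback idea concrete, here is a minimal Python sketch that aggregates per-workload costs by a `team` tag. The records, tag names, and dollar figures are all illustrative, not output from any real billing export.

```python
# Hypothetical illustration of cost showback: aggregating per-workload
# cloud costs by a "team" tag to see what each group is spending.
from collections import defaultdict

# Example cost records, shaped like a simplified billing export
# (workload name, tags, monthly cost in USD) -- illustrative values only.
cost_records = [
    {"workload": "web-frontend", "tags": {"team": "storefront"}, "cost": 420.0},
    {"workload": "checkout-api", "tags": {"team": "storefront"}, "cost": 310.0},
    {"workload": "etl-pipeline", "tags": {"team": "data"}, "cost": 650.0},
]

def showback_by_tag(records, tag_key):
    """Sum costs per value of the given tag, e.g. per team or project."""
    totals = defaultdict(float)
    for record in records:
        owner = record["tags"].get(tag_key, "untagged")
        totals[owner] += record["cost"]
    return dict(totals)

print(showback_by_tag(cost_records, "team"))
# e.g. {'storefront': 730.0, 'data': 650.0}
```

In practice the same grouping works for any tag key, which is why consistent tagging (the previous bullet) is a prerequisite for meaningful showback.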
How do I avoid unexpected cloud costs?
For many businesses, a key driver for moving to the cloud is cost savings; however, with container-based architectures it’s not uncommon for unexpected costs to pop up in your bill. The dynamic, sometimes unpredictable nature of container clusters means that fluctuations in demand, or even a simple misconfiguration, can cause a cluster to scale dramatically, resulting in a bigger bill at the end of the month.
The ability to identify trends in container behavior, like a workload that is scaling too rapidly, can mitigate unforeseen charges. Similarly, a sudden drop in a cluster’s size or cost may indicate a performance issue. Early trend identification and proper alerting allows ops teams to remain within their budgets, and keep services available.
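As a rough illustration of trend identification, the sketch below flags a day whose cluster cost deviates sharply from recent history. The two-sigma threshold and the cost figures are assumptions for the example, not a description of any particular monitoring product.

```python
# A minimal sketch of cost-trend alerting: flag a day whose cluster cost
# deviates sharply from the recent average, in either direction.
# Threshold and data are illustrative assumptions.
from statistics import mean, stdev

def detect_cost_anomaly(daily_costs, threshold_sigmas=2.0):
    """Return True if the latest day's cost is an outlier vs. the history."""
    history, latest = daily_costs[:-1], daily_costs[-1]
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) > threshold_sigmas * sigma

# Steady spend around $100/day, then a sudden spike -- e.g. a
# misconfigured auto scaler doubling the cluster overnight.
costs = [101, 99, 102, 98, 100, 100, 205]
print(detect_cost_anomaly(costs))  # True: worth an alert before month end
```

Because the check is symmetric, the same logic would also catch the sudden drop in cluster size or cost mentioned above, which may indicate a performance issue rather than a savings win.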
What can I do to ensure container scalability while minimizing costs?
The ability to auto scale resources for your containers is critical both for cost and performance, but in doing so, you’ll often face a catch-22. You want your containers to launch immediately and be highly available, so you pad them with extra capacity. But keep too much spare compute power and you’re looking at an overprovisioned cluster with a higher bill. On the flip side, underprovisioning in an effort to cut costs and reduce waste can lead to performance issues or errors in your application.
When it comes to auto scaling in a containerized environment, decisions should be based on the resource requirements of all the workloads within a cluster, not the CPU or memory utilization of individual machines (which is what traditional scaling metrics look at). An intelligently managed cluster will auto scale to ensure that no workload is pending, that all running workloads have their requirements met, and that the underlying resources are as highly utilized as possible.
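The idea of scaling on aggregate workload requirements rather than per-machine utilization can be sketched roughly as follows. The node sizes, request values, and the capacity-summing simplification are all illustrative; a real autoscaler also bin-packs workloads onto specific nodes.

```python
# Illustrative sketch of requirement-driven scaling: the decision looks at
# the summed CPU/memory *requests* of all workloads versus what the current
# nodes provide, not at per-machine utilization. Names and numbers are
# hypothetical, not a real scheduler.

def nodes_needed(workload_requests, node_cpu, node_mem):
    """Smallest node count whose total capacity covers all requests.

    A real autoscaler also bin-packs workloads per node; summing
    capacity is a deliberate simplification for illustration.
    """
    total_cpu = sum(cpu for cpu, _ in workload_requests)
    total_mem = sum(mem for _, mem in workload_requests)
    by_cpu = -(-total_cpu // node_cpu)   # ceiling division
    by_mem = -(-total_mem // node_mem)
    return max(by_cpu, by_mem, 1)

# (CPU millicores, memory MiB) requested by each workload in the cluster.
requests = [(500, 1024), (250, 512), (1500, 2048), (750, 4096)]
current_nodes = 1
target = nodes_needed(requests, node_cpu=2000, node_mem=4096)
if target > current_nodes:
    print(f"scale out: {current_nodes} -> {target} nodes so no workload stays pending")
```

Note that the trigger here is a pending workload whose requests cannot be met, not a node crossing a CPU threshold, which is the distinction the paragraph above draws.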
What is the best way to define my container resource requirements?
Since auto scaling containers is most effective when based on resource requirements, defining and right-sizing those requirements is critical, but often challenging. Users need to define the size and number of cloud compute instances, as well as the requirements of each individual application. Without these, applications may consume significantly more resources than they actually need, and at scale this can add up to big numbers on the cloud bill.
The ability to visualize an application’s performance and resource utilization in real time is essential for adjusting requirements to track actual utilization as closely as possible, resulting in a highly utilized cluster.
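One common right-sizing heuristic, sketched below under assumed numbers, is to set a container’s request near a high percentile of its observed usage plus a little headroom. The percentile, headroom factor, and samples are illustrative choices, not a prescribed formula.

```python
# A hedged sketch of right-sizing: recommend a CPU request close to a high
# percentile of observed usage plus small headroom, so the request tracks
# real utilization instead of a generous guess. Values are illustrative.

def rightsize_request(usage_samples, percentile=0.95, headroom=1.1):
    """Recommend a resource request from observed usage samples."""
    ordered = sorted(usage_samples)
    index = min(int(len(ordered) * percentile), len(ordered) - 1)
    return round(ordered[index] * headroom)

# CPU usage samples in millicores; suppose the original request was 1000m.
samples = [120, 150, 140, 180, 160, 170, 155, 165, 175, 200]
print(rightsize_request(samples))  # far below the 1000m guess
```

The gap between the guessed request and the recommendation is exactly the overprovisioning that, multiplied across a cluster, shows up on the bill.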
What do cloud providers offer to help me optimize container costs?
Cloud service providers offer various discounted pricing models, such as long-term commitments (e.g. reserved instances) and spare compute capacity (e.g. spot instances and preemptible VMs). Both options can be challenging to implement. The rapidly changing requirements of container workloads can make it difficult to commit long term to one compute type, while running spot instances on your own can require significant configuration, maintenance and expertise to ensure availability.
Despite the challenges, these offerings hold the key to significant cost savings when applied in the right way. For instance, when a project moves into production, it becomes possible to identify consistently used compute resources that can serve as the basis for reserved instances. For container workloads, where the orchestration layer abstracts the underlying infrastructure, low-cost yet volatile spot instances are a more natural fit, with intelligent orchestration providing the requested CPU and memory.
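The savings logic can be illustrated with back-of-the-envelope arithmetic: cover the steady baseline with a commitment discount and the variable remainder with spare capacity. All rates and discounts below are hypothetical, not any provider’s actual pricing.

```python
# Back-of-the-envelope arithmetic for blending pricing models: cover the
# steady baseline with a commitment discount and the variable remainder
# with spare (spot) capacity. All prices and discounts are hypothetical.

ON_DEMAND = 0.10   # $/instance-hour, assumed on-demand rate
RESERVED = 0.06    # assumed ~40% commitment discount
SPOT = 0.03        # assumed ~70% spot discount

def monthly_cost(baseline, peak_extra, hours=730):
    """Cost of steady `baseline` instances plus `peak_extra` burst instances."""
    all_on_demand = (baseline + peak_extra) * ON_DEMAND * hours
    blended = baseline * RESERVED * hours + peak_extra * SPOT * hours
    return all_on_demand, blended

on_demand, blended = monthly_cost(baseline=10, peak_extra=5)
print(f"all on-demand: ${on_demand:.0f}/mo, blended: ${blended:.0f}/mo")
```

The blend roughly halves the bill in this toy scenario, which is why matching each pricing model to the workload profile, rather than picking one for everything, pays off.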
Ocean by Spot and how it fits in with what we’ve learned
Spot identified these challenges early on. As existing customers shifted to containerized environments, they shared their day-to-day challenges, and Spot designed its offerings with those in mind.
Ocean by Spot is a container infrastructure management solution that provides a serverless experience. It automates away the selection of instance types, using smart auto scaling that provides the cluster with optimal resources in terms of type and size, while keeping an intelligently defined buffer of spare capacity. Machine learning algorithms ensure the cluster runs on an optimal blend of pricing options.
Ocean eliminates the downsides of leveraging spare capacity, and provides additional features such as built-in support for reservations, cost showback, right-sizing, custom launch specifications and detailed logs. The result is a cluster optimized for both performance and cost, all of it easily reproducible with Terraform or CloudFormation templates.
Moreover, Ocean is part of a platform whose other products work seamlessly with it. Cloud Analyzer gives customers high-level views of their entire cloud ecosystem, letting them identify savings opportunities and optimize their containerized workloads with Ocean at the click of a button. Spot Eco fully automates the management of reserved capacity and savings plans, which Ocean makes use of when relevant.
All of these solutions are packaged in a single platform, backed by SLAs and 24/7 support. If you’re interested in learning how Spot can help you optimize your container workloads, get in touch today.