At Spot, we’re seeing more customers expand their Kubernetes operations, running production workloads successfully and seeing real gains in performance and agility. Now, these companies are looking to optimize Kubernetes Day 2 operations, with a goal of cost optimization and infrastructure efficiency.
Ocean by Spot gives users the leverage to take full advantage of the cloud resources in their container environments. Offering a serverless experience for container infrastructure management, Ocean can deliver up to 90% savings on cloud computing with built-in pricing optimization and right sizing capabilities.
To support our customers in this journey, we continue to add features and enhancements to Ocean that enable richer visibility, easier use, and faster time to market while saving money. The updates below are expected to roll out in the next few weeks, and we’re excited to share this preview of them with you.
Virtual Node Groups
Launch specifications allow users to manage different types of workloads on the same cluster, spanning multiple AZs and subnets. A key feature in Ocean, launch specs go beyond describing instance properties: they provide governance mechanisms and scaling attributes, and they define the worker node groups in the cluster. Launch specs deliver physical separation between different Kubernetes workloads, allowing teams to carve out infrastructure within a single cluster to fit their needs.
We are renaming launch specifications to Virtual Node Groups (VNGs) to bring more clarity and improve our users’ experience in Ocean. Within the next few weeks, users will see a new tab on the Ocean console for easy access to this key feature. Here, users will still be able to configure labels and taints for their workloads, and have a view into data on resource allocation and headroom.
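Taints and tolerations are what keep a VNG’s nodes reserved for the workloads meant to run on them. As a minimal sketch of the underlying Kubernetes rule (this illustrates standard scheduler semantics, not Ocean’s internal implementation; the `workload=gpu` taint is a hypothetical example):

```python
# Illustration of Kubernetes taint/toleration matching: a pod may only
# land on a node if it tolerates every NoSchedule taint on that node.

def tolerates(toleration: dict, taint: dict) -> bool:
    """A toleration matches a taint when key, effect, and value line up."""
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        # "Exists" with no key tolerates any taint; with a key, any value.
        return toleration.get("key") in (None, taint["key"])
    return (toleration.get("key") == taint["key"]
            and toleration.get("value") == taint.get("value"))

def pod_fits_node(pod_tolerations: list, node_taints: list) -> bool:
    """Every NoSchedule taint on the node must be tolerated by the pod."""
    return all(
        any(tolerates(t, taint) for t in pod_tolerations)
        for taint in node_taints
        if taint["effect"] == "NoSchedule"
    )

# Hypothetical GPU-only VNG: nodes carry a taint, GPU pods carry the
# matching toleration, so general-purpose pods are kept off those nodes.
gpu_taint = {"key": "workload", "value": "gpu", "effect": "NoSchedule"}
gpu_toleration = {"key": "workload", "operator": "Equal",
                  "value": "gpu", "effect": "NoSchedule"}

print(pod_fits_node([gpu_toleration], [gpu_taint]))  # True
print(pod_fits_node([], [gpu_taint]))                # False
```

Configuring the taint on the VNG and the toleration on the workload is what carves out that slice of the cluster.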
Node lifecycle flexibility
Running on spot instances affords a significant discount, and with Ocean, companies have been able to achieve their desired cost savings while ensuring high availability backed by an SLA. Ocean users can now define how much of their infrastructure they want running on spot, reserved and on-demand instances. This feature helps users migrate their workloads to spot instances at their own pace. With the ability to mix spot and on-demand instances, users benefit from cost savings, low risk and high availability.
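The cost impact of that split is simple arithmetic. A back-of-the-envelope sketch (the 70% spot discount and $0.10/hour on-demand price below are hypothetical placeholders, not quoted Ocean or cloud-provider figures):

```python
# Blended hourly cost for a cluster running a chosen fraction of its
# nodes on spot instances. All prices and ratios here are made-up
# example numbers for illustration.

def blended_hourly_cost(nodes: int, on_demand_price: float,
                        spot_ratio: float, spot_discount: float) -> float:
    """Cost per hour when spot_ratio of the nodes run on spot instances."""
    spot_nodes = nodes * spot_ratio
    od_nodes = nodes - spot_nodes
    spot_price = on_demand_price * (1 - spot_discount)
    return spot_nodes * spot_price + od_nodes * on_demand_price

full_od = blended_hourly_cost(100, 0.10, spot_ratio=0.0, spot_discount=0.70)
mixed = blended_hourly_cost(100, 0.10, spot_ratio=0.8, spot_discount=0.70)
print(f"on-demand only: ${full_od:.2f}/h, 80% spot: ${mixed:.2f}/h")
# on-demand only: $10.00/h, 80% spot: $4.40/h
```

Even a partial move to spot compounds quickly at cluster scale, which is why migrating at your own pace still pays off immediately.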
For immediate scale up, Ocean maintains automated headroom—a buffer of spare capacity. In recent months, we introduced the ability to customize headroom configurations instead of a fixed allocation. Now, we’re lifting the veil by offering richer visibility into how headroom capacity is calculated in your cluster. Headroom allocations, broken down by CPU, memory and GPU, can be seen granularly at the node level, or aggregated at the VNG and cluster levels. As before, you have complete control over the process and can manually configure custom headroom units as you see fit.
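The node-to-VNG-to-cluster rollup is conceptually a straightforward aggregation. A minimal sketch, assuming hypothetical per-node headroom figures and field names (Ocean’s console presents a similar breakdown; this is not its implementation):

```python
# Aggregating spare capacity (headroom) from individual nodes up to
# their VNG, and from VNGs up to the cluster. Units and example values
# are hypothetical: CPU in millicores, memory in MiB, GPUs in units.

from collections import defaultdict

# Per-node headroom: (vng name, cpu millicores, memory MiB, gpus).
nodes = [
    ("general", 500, 1024, 0),
    ("general", 250, 512, 0),
    ("gpu", 1000, 4096, 1),
]

def rollup(node_headroom):
    """Sum node-level headroom per VNG, then across the whole cluster."""
    per_vng = defaultdict(lambda: [0, 0, 0])
    for vng, cpu, mem, gpu in node_headroom:
        per_vng[vng][0] += cpu
        per_vng[vng][1] += mem
        per_vng[vng][2] += gpu
    cluster = [sum(v[i] for v in per_vng.values()) for i in range(3)]
    return dict(per_vng), cluster

per_vng, cluster = rollup(nodes)
print(per_vng)   # {'general': [750, 1536, 0], 'gpu': [1000, 4096, 1]}
print(cluster)   # [1750, 5632, 1]
```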
Improved auto scaling
Autoscaling in Ocean has always been geared towards responsiveness and speed. An instance is spun up as soon as an unscheduled pod is created—provisioning requests are often made within one second of pod creation. In Kubernetes, however, there is the additional challenge of scaling many instances at one time, for example, when a sudden and significant change requires provisioning hundreds more nodes. Doing this without major node over-provisioning, while maintaining proper market distribution, is a challenge—especially when different pods may require different node configurations.
To support this kind of use case in Ocean, we recently added the ability to concurrently launch various machine types across data centers, leveraging spot market availability statistics to do so without over-provisioning. When you opt into this new feature in your account, expect your machines to scale up much faster when you need to scale big, fast.
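To see why sizing such a burst is non-trivial, consider the packing problem underneath it: pending pods must be grouped by the node configuration they require, then packed into as few nodes as possible before all of those nodes are requested in one concurrent launch. A simplified sketch using first-fit-decreasing packing on CPU requests alone (this illustrates the sizing problem, not Ocean’s actual algorithm; pod and node shapes are hypothetical):

```python
# Sizing a scale-up for a burst of pending pods: group pods by required
# node type, then first-fit-decreasing pack each group's CPU requests
# (millicores) to estimate how many nodes of each type to launch.

from collections import defaultdict

def nodes_needed(pods, node_capacity):
    """Return {node_type: node count} for a list of (node_type, cpu) pods."""
    groups = defaultdict(list)
    for node_type, cpu in pods:
        groups[node_type].append(cpu)
    plan = {}
    for node_type, requests in groups.items():
        bins = []  # remaining CPU capacity on each planned node
        for cpu in sorted(requests, reverse=True):
            for i, free in enumerate(bins):
                if free >= cpu:
                    bins[i] -= cpu  # pod fits on an already-planned node
                    break
            else:
                bins.append(node_capacity[node_type] - cpu)  # new node
        plan[node_type] = len(bins)
    return plan

# Hypothetical burst: 300 general pods at 900m CPU, 10 GPU pods at 3500m.
pending = [("general", 900)] * 300 + [("gpu", 3500)] * 10
capacity = {"general": 4000, "gpu": 8000}
print(nodes_needed(pending, capacity))  # {'general': 75, 'gpu': 5}
```

A real autoscaler must also weigh memory, GPU, taints, and spot market depth per instance type and data center, which is what makes launching hundreds of well-distributed nodes at once hard.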
Get started with Ocean
These features, along with all of Ocean’s capabilities for container right sizing, pricing optimization, infrastructure provisioning and auto scaling, make it a powerful tool that lets your applications take full advantage of the underlying cloud compute infrastructure.