Every application has unique compute infrastructure requirements. This is why container orchestrators such as Kubernetes give application engineers mechanisms like taints, tolerations, and affinities to control precisely where pods run.
In addition to honoring the constraints you can define natively in Kubernetes, Spot by NetApp's Ocean offers several constraint options (defined in the workload manifest) to control, for example, whether an application may run on spot instances, or only on nodes that are never to be scaled down.
While all of this is great for ensuring your applications get the compute resources they need, it can be hard for your devops team to understand why infrastructure is scaling the way it is, or why pods are being placed as they are, without digging into your YAML files.
To ensure complete transparency and drive more efficient collaboration between application and devops teams, we are pleased to introduce Ocean Scaling Constraints.
Better collaboration and transparency between application and devops teams
With Ocean’s new Scaling Constraints feature, devops teams can now easily see any constraints that the application team applied to specific Kubernetes pods. This clarity eliminates potential confusion and errors, helping ensure that while the overall goal is to drive maximum cluster efficiency, applications that require special conditions will run and scale as they should.
The following are the pod constraints that can be seen within the Ocean console:
- Node-lifecycle – this controls which pricing model, whether spot or on-demand, can be used for a given application
- Restrict-scale-down – this controls whether the node upon which the pod runs can be scaled down when underutilized
- Zone affinity – this controls which cloud availability zone a pod can run in
- Instance type affinity – this controls which cloud VM types a pod is eligible to be scheduled on based on node labels
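In a workload manifest, the first two constraints are typically expressed as pod labels and node selectors. The sketch below is illustrative only: the application name and image are hypothetical, and the `spotinst.io/node-lifecycle` and `spotinst.io/restrict-scale-down` label keys follow Spot Ocean's documented conventions, but you should verify the exact keys and values against the Ocean documentation for your cluster.

```yaml
# Illustrative Deployment manifest carrying two Ocean scaling constraints.
# Application name and image are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
        # Restrict-scale-down: prevent the node this pod runs on from
        # being scaled down when underutilized
        spotinst.io/restrict-scale-down: "true"
    spec:
      nodeSelector:
        # Node-lifecycle: schedule only on on-demand ("od") capacity
        spotinst.io/node-lifecycle: od
      containers:
        - name: payments-api
          image: example/payments-api:1.0
```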
Easy drill-down into pod constraints and affected nodes
With Ocean’s Scaling Constraints you can clearly see which nodes are affected by any pod constraints. For example, if you notice an unusually high number of on-demand instances in use, it may be that the node-lifecycle constraint is set to “OD” (on-demand). Likewise, if underutilized nodes are not being scaled down as they typically are by Ocean’s container-driven autoscaling, the restrict-scale-down constraint is probably set to “true” on at least one pod running on those nodes.
In both cases, the application team has made decisions (for whatever reason) that affect infrastructure behavior, irrespective of the impact on cost efficiency and resource utilization.
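The affinity-based constraints work similarly. Zone and instance type affinity are typically expressed through standard Kubernetes node labels in the pod spec; the sketch below uses the well-known `topology.kubernetes.io/zone` and `node.kubernetes.io/instance-type` labels, with illustrative zone and instance type values.

```yaml
# Sketch of a pod spec fragment pinning a workload to one availability
# zone and a set of instance types; the values shown are illustrative.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              # Zone affinity: only schedule in this availability zone
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                  - us-east-1a
              # Instance type affinity: only these VM types are eligible
              - key: node.kubernetes.io/instance-type
                operator: In
                values:
                  - m5.large
                  - m5.xlarge
```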
Within the Nodes tab (in the Ocean console) you can see whether any nodes are running pods with constraints. This view provides visibility at the node level, showing properties such as memory/CPU, zone, lifecycle (i.e. pricing model), etc.
Here you can dive deeper into the applications running on the affected nodes and find the exact constraints on each pod.
Easy filtering to find and understand constraints
The new scaling constraints component shows all instances running pods that have constraints, with a summary card per constraint. Clicking the Node Lifecycle: On Demand card, for example, filters the view to only nodes running pods with that constraint.
You can continue drilling down into each node to see exactly which pods carry that constraint. This way, the devops team can easily understand scaling behavior and sync with application engineers on a constraint’s impact and, if desired, on ways to remove it.
Getting started with Ocean Scaling Constraints
Ocean Scaling Constraints is now generally available, and you can start using it immediately in your Ocean console. Check it out today!