Spot Ocean now supports Kubernetes pod topology spread constraints

As a premium autoscaler for containers and Kubernetes applications, Spot Ocean automatically and continuously executes scaling actions based on the resource requests and constraints specified by pods and containers. This container-driven autoscaling approach is core to how Ocean provisions and optimizes the compute infrastructure required to run containers in the cloud.

To help Kubernetes users meet different business needs such as high availability, low latency and controlled saturation of their applications, the container orchestrator has several mechanisms for governing pod scheduling. Among these mechanisms are affinity rules and pod disruption budgets. Pod topology spread constraints were initially introduced in Kubernetes v1.16 and graduated to GA in v1.19, making them part of the mainline Kubernetes feature set. This mechanism aims to spread pods evenly across multiple node topologies.

We’re happy to share that Ocean now fully supports pod topology spread constraints, scaling infrastructure according to the different topologies requested by pods as well as other changing dynamics of the cluster.

Why use pod topology spread constraints?

One possible use case is to achieve high availability of an application by ensuring an even distribution of pods across multiple availability zones. In this way, service continuity can be maintained by eliminating single points of failure, even through rolling updates and scaling activities. For example, if an application has 15 replicas and nodes in three availability zones, Kubernetes will schedule exactly five replicas on nodes in each AZ (assuming a hard constraint with a maxSkew of 1).
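As a concrete sketch of this use case, the Deployment below requests 15 replicas with a hard zone spread constraint. The name, labels and image here are illustrative assumptions, not values taken from any Ocean configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 15
  selector:
    matchLabels:
      app: myApp
  template:
    metadata:
      labels:
        app: myApp
    spec:
      # With three zones and maxSkew: 1, the scheduler must end up
      # placing exactly five of the 15 pods in each zone.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: myApp
      containers:
        - name: myapp
          image: nginx:1.21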

How does autoscaling work with Kubernetes pod topology spread constraints?

In an autoscaling-enabled infrastructure environment, where nodes are added and removed continuously, it is crucial that the infrastructure autoscaler is aware of the topology spread requested by pods at all times. Spot Ocean actively bin packs pods onto nodes to minimize wasted compute resources, so only hard requirements will trigger the provisioning of additional nodes. In the case of spread constraints, Ocean recognizes the whenUnsatisfiable: DoNotSchedule setting and the requested maxSkew configuration, and acts accordingly.
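To illustrate the distinction, here is a minimal sketch contrasting a hard constraint with a soft one (the labels are assumptions carried over from the example above):

topologySpreadConstraints:
  # Hard requirement: a pod that would violate the skew stays Pending,
  # which is the signal that causes Ocean to provision additional nodes.
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: myApp
  # Soft preference: the scheduler tries to honor the spread but places
  # the pod anyway if it cannot, so no scale-up is triggered.
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: myApp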

When pods contain spread constraints, Ocean is aware of their labels and can provision nodes from all relevant topologies. Before such pods are first applied, at least one node from each topology must already exist so that Kubernetes knows which topologies are available. This can easily be arranged with Ocean’s headroom feature or by setting a minimum number of nodes per VNG.
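As a rough sketch, manual headroom at the cluster level might look like the excerpt below. It is shown as YAML for readability (the Spot API accepts the equivalent JSON), and the exact field names should be verified against the Ocean documentation for your cloud provider:

autoScaler:
  headroom:
    cpuPerUnit: 1024    # millicores of spare CPU reserved per unit
    memoryPerUnit: 512  # MiB of spare memory reserved per unit
    numOfUnits: 2       # e.g. one spare unit per availability zone

Which zone each headroom unit (and hence each spare node) lands in ultimately depends on the subnets configured for the cluster’s VNGs.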

Take, for example, an Ocean virtual node group with two subnets in two availability zones (or alternatively, two VNGs, each with a subnet in a different AZ). When a Deployment whose pod template (spec.template.spec) includes the following spread constraint is applied, Ocean will scale up nodes from both subnets/VNGs.

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: myApp

When scaling down, Ocean considers spread constraints and will only remove a node if the requested maxSkew is maintained. For example, with six pods spread 2/2/2 across three zones and a maxSkew of 1, Ocean will not drain the only node in a zone, since the evicted pods could not be rescheduled without violating the constraint.

Note: Ocean Controller version 1.0.78 or later is a prerequisite for using pod topology spread constraints.

In line with known upstream limitations regarding scaling down a deployment, Ocean may continue to maintain infrastructure with an uneven spread. Descheduling pods, or updating the spread constraints and then performing a rolling update (for example, with kubectl rollout restart), will trigger Ocean to rebalance the infrastructure automatically.

Start autoscaling with Ocean

Supporting the growing feature set of Kubernetes is an integral part of the Spot Ocean roadmap, and we continue to add new features of our own that extend these capabilities even further. Learn more about the many features of Spot Ocean and get started with Spot today!