Kubernetes: 5 Key Factors to Consider Before Starting Up


As enterprises move towards a container-first strategy for both legacy and greenfield applications, they see Kubernetes as the standard for container orchestration. Containerization makes complete sense as enterprises embrace DevOps and cloud-native application architectures.

Containers improve not only developer productivity but also operational efficiency. By embracing an elastic infrastructure with Kubernetes as the container orchestration plane, enterprise IT can transform itself from gatekeeper to part of the core innovation group.

IT teams not only empower developers to be more productive and innovative, they also remove friction along the DevOps pipeline, making the flow of application code from a developer's laptop to the production environment much smoother.

In its latest Kubernetes survey, the Cloud Native Computing Foundation (CNCF) found that:

  • 40% of respondents from enterprise companies (5,000+ employees) are running Kubernetes in production
  • The top three reasons for adopting cloud-native technologies like Kubernetes are faster deployment time, improved scalability, and cloud portability

As more and more organizations consider Kubernetes, it is important to understand some of the critical factors that can impact the success of their transformation journey. In this blog post, we will highlight five key factors to consider before starting with Kubernetes.

Managed Kubernetes

One of the first decisions is whether to deploy Kubernetes on-premises or in the public cloud. If you decide to use one or more public cloud providers, the next question is whether to deploy Kubernetes yourself on the cloud provider's virtual machines (e.g., from scratch on EC2), taking on responsibility for the lifecycle and availability of the underlying nodes, or to use a managed Kubernetes service such as Amazon EKS, Google Cloud's GKE, or Azure's AKS.

The DIY approach to Kubernetes on cloud virtual machines is complex and operationally inefficient; for most organizations, it makes operational sense to use a managed Kubernetes service instead.

Beyond the complexity of the initial deployment, Day 2 operations, such as managing the Kubernetes clusters on the nodes, ensuring high availability (HA), and meeting SLAs, carry a high operational overhead that impacts both the agility and the cost of running Kubernetes clusters.

With Managed Kubernetes offerings, organizations can offload some of these operational tasks to the cloud provider and focus on running the apps on the Kubernetes clusters. As you evaluate the Managed Kubernetes offerings, take into account your SLA needs and how much of Day 2 operations are offloaded to the cloud provider. 

Serverless Containers

Even though managed Kubernetes offloads some of the operational tasks related to the Kubernetes control plane, managing the virtual machines that form the worker nodes is still the user's responsibility. This adds considerable operational overhead, as users must manage the virtual machines underneath the Kubernetes clusters that run their applications. Even with automation, managing the worker nodes to ensure HA, monitoring their health, and so on is undifferentiated heavy lifting.

The key to avoiding this overhead lies in using a platform that takes care of managing the virtual machines, leaving the orchestration and management of container clusters in the hands of the users.

While some of the serverless container offerings by cloud providers address these needs, they are very expensive (a fact highlighted in the next section) and have other limitations like lack of multi-cloud support and, in the case of some cloud providers, lack of Kubernetes support. 

Enterprises wanting to avoid the operational costs associated with managing the underlying virtual machines should consider using platforms like SpotInst Ocean to run serverless containers.

Cost Savings

Both DIY Kubernetes on cloud virtual machines and managed Kubernetes offerings add significant infrastructure cost inefficiencies on top of the operational costs described in the two sections above. As organizations embrace containers for their cloud-native workloads, they should focus on two aspects to save costs:

  • Right-sizing of the underlying virtual machines
  • Using Spot Instances to save costs while meeting their SLA needs

With the right automation platform, driven by analytics about the behavior of Spot Instances offered by the cloud providers, organizations can save up to 90% of their costs.
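To make that arithmetic concrete, here is a small Python sketch of how a blended cluster cost works out when part of the fleet runs on Spot Instances. The prices and node counts are made-up placeholders, not real cloud quotes:

```python
# Illustrative sketch: blended cluster cost with Spot Instances.
# Prices and discounts are hypothetical placeholders, not real quotes.

ON_DEMAND_PRICE = 0.10   # $/hour per node (hypothetical)
SPOT_DISCOUNT = 0.90     # Spot Instances can be up to ~90% cheaper

def blended_hourly_cost(num_nodes: int, spot_fraction: float) -> float:
    """Hourly cost of a cluster with a given fraction of nodes on Spot."""
    spot_nodes = num_nodes * spot_fraction
    on_demand_nodes = num_nodes - spot_nodes
    spot_price = ON_DEMAND_PRICE * (1 - SPOT_DISCOUNT)
    return spot_nodes * spot_price + on_demand_nodes * ON_DEMAND_PRICE

all_on_demand = blended_hourly_cost(20, 0.0)   # 20 nodes, no Spot
mostly_spot = blended_hourly_cost(20, 0.8)     # 80% of nodes on Spot
savings = 1 - mostly_spot / all_on_demand
print(f"${all_on_demand:.2f}/h vs ${mostly_spot:.2f}/h -> {savings:.0%} saved")
```

Even with a fifth of the fleet kept on On-Demand capacity for SLA headroom, the blended savings remain substantial, which is why Spot-aware automation pays off.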

When using serverless containers, it is important that the underlying virtual machines are used efficiently, right-sized to fit the container or cluster sizes. Resource waste caused by oversized virtual machines underneath defeats the very purpose of serverless containers.
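One way to reason about right-sizing is to compare what the scheduled pods actually request with what the chosen node size provides. A minimal sketch, with hypothetical node sizes and CPU requests:

```python
# Illustrative right-sizing check: how much of a node's CPU capacity
# is left unused by the pods scheduled onto it? All figures are
# hypothetical examples, not measurements.

def node_waste(node_cpu: float, pod_cpu_requests: list[float]) -> float:
    """Fraction of node CPU left unused by the given pod requests."""
    requested = sum(pod_cpu_requests)
    if requested > node_cpu:
        raise ValueError("pods do not fit on this node")
    return 1 - requested / node_cpu

pods = [2.0, 1.5, 1.5, 1.0]          # vCPU requests of four pods
oversized = node_waste(16.0, pods)    # a 16-vCPU node: mostly idle
right_sized = node_waste(8.0, pods)   # an 8-vCPU node: far less waste
print(f"oversized: {oversized:.0%} wasted, right-sized: {right_sized:.0%} wasted")
```

A real platform would make this comparison continuously, across memory as well as CPU, and pick node shapes that minimize the wasted fraction.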

Additionally, by leveraging Spot Instances, with On-Demand and Reserved Instances as a fallback for meeting SLA requirements, organizations can considerably reduce their infrastructure costs.

Application Architectures

When cloud computing became the norm for procuring the infrastructure needed to run applications, the industry went through a paradigm shift in how infrastructure resources were treated. "Pets vs. cattle" was highlighted as the difference between legacy and cloud-based infrastructure. Traditionally, servers were irreplaceable and applications were built with that reliability in mind. Servers were treated like pets and cared for to ensure application reliability.

With virtual machines on cloud, the self service interface for provisioning resources made virtual machines disposable, leading to the idea of disposable infrastructure. 

With cloud, infrastructure was treated more holistically than as individual virtual machines (each of which can be replaced with a new one), leading to the idea of infrastructure as cattle.

With containers, this argument can be extended to applications and this puts focus on the application architectures that are suitable for containers. 

Even though legacy workloads can be encapsulated in containers and run on Kubernetes, there is operational overhead and cost involved in running stateful applications on Kubernetes. The application architectures best suited for Kubernetes include stateless applications, microservices architectures, CI/CD use cases, batch processing, and similar workloads. Other application architectures can be containerized, but it is important to understand the additional operational overhead and costs.

Observability

While Prometheus can be used to monitor the health of Kubernetes clusters, an enterprise-scale deployment requires a more detailed view of the health of the Kubernetes deployment. It is important to visualize the cluster topology and to monitor at the container level.

Without detailed monitoring of Kubernetes cluster usage and efficiency along with automated remediation, inefficiencies will creep in, impacting the very advantage Kubernetes offers in terms of operational efficiencies and cost savings. 
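As a sketch of the kind of automated efficiency check such a platform performs, the snippet below flags containers whose observed CPU usage is far below what they request. The container names, numbers, and threshold are illustrative; in practice these figures would come from a metrics backend such as Prometheus:

```python
# Illustrative efficiency check: flag containers whose observed CPU
# usage is well below their request. Data and threshold are made up;
# a real system would pull these numbers from a metrics backend.

UNDERUSE_THRESHOLD = 0.3  # flag containers using <30% of their request

def flag_underutilized(metrics: dict[str, tuple[float, float]]) -> list[str]:
    """metrics maps container name -> (cpu_request, observed_cpu_usage)."""
    flagged = []
    for name, (request, usage) in metrics.items():
        if request > 0 and usage / request < UNDERUSE_THRESHOLD:
            flagged.append(name)
    return flagged

sample = {
    "web-frontend": (1.0, 0.8),   # healthy utilization
    "batch-worker": (2.0, 0.2),   # heavily over-provisioned
    "cache": (0.5, 0.4),
}
print(flag_underutilized(sample))  # -> ['batch-worker']
```

Automated remediation would then act on the flagged containers, for example by lowering their requests or repacking them onto smaller nodes.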

Kubernetes brings some unique challenges, and robust monitoring and automation are critical for running clusters smoothly. Invest in platforms that provide detailed monitoring and observability features so you can act proactively to meet uptime requirements.

Conclusion

Kubernetes helps enterprises be more agile and use infrastructure more efficiently. However, to maximize ROI, it is important to consider factors like serverless containers, Spot Instances, and built-in observability along with analytics-driven automation. The factors listed in this blog post can help organizations achieve this goal.

To see how Spotinst can help with Kubernetes, get started here for free!