Running Your Kubernetes Kops Cluster on Spot Instances with Elastigroup

Reading Time: 5 minutes

Kubernetes has become the de facto standard for container orchestration. However, installing and maintaining it yourself in production can be a challenge. Luckily, kops was created to simplify the creation and management of a production-grade Kubernetes cluster on public cloud platforms. Organizations looking to move their workloads to the public cloud will love this, but cost and workload scaling quickly become concerns after the migration.

An efficient way to reduce costs on AWS is to use excess-capacity instances known as Spot Instances, which can cost up to 90% less than their on-demand equivalents. However, Spot Instances can be terminated on short notice, and there is no built-in, efficient way to manage workloads when that happens. Luckily, Elastigroup is here to help!

When a Kubernetes cluster is deployed using kops and Elastigroup, your nodes run on Spot Instances and you immediately start to reduce operating costs. By default, the Kubernetes master instance is deployed on on-demand instances for greater stability.

When a node running on a Spot instance is scheduled for termination, Elastigroup will fall back to another Spot or on-demand instance, join it to the k8s cluster, and drain all the pods from the soon-to-be-interrupted node. Elastigroup also allows you to utilize your Reserved Instances before launching additional Spot Instances. Beyond reducing costs, Elastigroup's Kubernetes Autoscaler manages your Kubernetes infrastructure and automatically scales it to meet your pods' requirements.

In this post, I will show how to create a Kubernetes cluster with kops on AWS EC2 Spot Instances using Elastigroup, and explain how it scales pods and instances without user intervention so you can spend more time on other tasks instead of managing infrastructure.

Prerequisites

  1. A Spotinst account
  2. An AWS account
  3. Kubectl and kops binaries downloaded and installed per our documentation.
  4. Spotinst Kops shell scripts
  5. AWS CLI
  6. An empty S3 bucket used to store the kops configuration.
  7. An AWS Route53 sub-domain

Configuring the Domain

To get started with kops, we first need to configure AWS Route53 with a sub-hosted zone using the AWS CLI commands below. Kubernetes will use the Route53 sub-hosted zone as an endpoint for accessing the cluster and the services you create. Substitute mycluster with your desired cluster name and mydomain with your domain name on Route53.
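A minimal sketch of the hosted-zone creation (the --caller-reference value just needs to be a unique string):

    aws route53 create-hosted-zone \
      --name mycluster.mydomain.com \
      --caller-reference "mycluster-$(date +%s)"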

The output of the above command will display a list of nameservers that are needed for the next step. Create a new file called subdomain.json that delegates the sub-domain to those nameservers.
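It should look roughly like this:

    {
      "Comment": "Delegate the cluster sub-domain to its own hosted zone",
      "Changes": [
        {
          "Action": "CREATE",
          "ResourceRecordSet": {
            "Name": "mycluster.mydomain.com",
            "Type": "NS",
            "TTL": 300,
            "ResourceRecords": [
              {"Value": "ns-1.awsdns-01.co.uk"},
              {"Value": "ns-2.awsdns-02.org"},
              {"Value": "ns-3.awsdns-03.com"},
              {"Value": "ns-4.awsdns-04.net"}
            ]
          }
        }
      ]
    }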

Substitute Name with your cluster name and the ResourceRecords values with the nameservers from the previous command. Save and exit the file and import it with the following command:
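    # Find the hosted zone ID of the parent domain (mydomain)
    aws route53 list-hosted-zones --query "HostedZones[?Name=='mydomain.com.'].Id"

    # Import the delegation record from subdomain.json into the parent zone
    aws route53 change-resource-record-sets \
      --hosted-zone-id <parent-zone-id> \
      --change-batch file://subdomain.json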

It may take a few minutes to create and propagate the DNS changes.

Creating the Cluster

Before we can create a Kubernetes cluster, we need to download and extract the Spotinst kops shell scripts. Once extracted, navigate to the folder and use your favorite text editor to modify the 00-env.sh script. In this file, we need to configure the details for our deployment by modifying the following variables:

Fill in your AWS access and secret keys. Your Spotinst Account ID is located under My Account in the Spotinst Console. To obtain your Spotinst token, click on the API section of My Account and generate a new token. For KOPS_STATE_STORE, enter the AWS S3 bucket to use; if you do not have an S3 bucket created already, create one now. This bucket stores the state of the cluster. Finally, for KOPS_CLUSTER_ZONES, enter the AWS availability zone(s) to use, such as "us-west-2a".
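As a rough sketch, the values to fill in look something like this (the exact variable names in 00-env.sh may differ from the ones shown here):

    # Illustrative only – match the actual variable names used in 00-env.sh
    export AWS_ACCESS_KEY_ID="<your-aws-access-key>"
    export AWS_SECRET_ACCESS_KEY="<your-aws-secret-key>"
    export SPOTINST_ACCOUNT_ID="act-xxxxxxxx"        # found under My Account in the Spotinst Console
    export SPOTINST_TOKEN="<your-api-token>"         # generated under My Account > API
    export KOPS_STATE_STORE="s3://my-kops-state-bucket"
    export KOPS_CLUSTER_ZONES="us-west-2a"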

After that, we can create the cluster:
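The create script included in the Spotinst bundle drives this step (its exact filename may vary); under the hood it is roughly equivalent to a kops create cluster call along these lines:

    # A rough kops equivalent of what the Spotinst create script does
    kops create cluster \
      --name mycluster.mydomain.com \
      --state "${KOPS_STATE_STORE}" \
      --zones "${KOPS_CLUSTER_ZONES}" \
      --yes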

This process can take a few minutes to complete. To check the status of the cluster, we can run the validate script:
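    # Roughly what the validate script runs under the hood
    kops validate cluster \
      --name mycluster.mydomain.com \
      --state "${KOPS_STATE_STORE}"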

Kubectl is the CLI utility used to interact with a Kubernetes cluster. When the cluster is in a ready state after running the validate script, kubectl will automatically be configured to connect to the cluster.
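A quick way to confirm that kubectl is pointed at the new cluster:

    kubectl cluster-info
    kubectl get nodes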

If you return to the Spotinst Console and search by the cluster name, you will be able to see that an Elastigroup was created for the master node and an additional one for the nodes.

The nodes Elastigroup has only a single instance. As our pods increase, Elastigroup will scale up the number of instances in the cluster. Let's give it a try by running a sample application and scaling it up to ridiculous amounts.

Running a Sample Application

To deploy the sample app, download the WordPress Kubernetes manifest to your computer and run the following commands:
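    # Assumes the manifest was saved locally as wordpress.yaml (the filename is an example)
    kubectl apply -f wordpress.yaml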

This manifest will create three pods consisting of Nginx, MySQL, and PHP-FPM. The status of the pods can be checked with the following command:
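    kubectl get pods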

When all of the pods are in a Running state, we can access WordPress by getting the URL under EXTERNAL-IP for the Nginx service:
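    # Look for the EXTERNAL-IP of the LoadBalancer service in front of Nginx
    # (the exact service name depends on the manifest)
    kubectl get services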

Configuring WordPress (Optional)

Copy and paste the address into your web browser to start configuring WordPress:

Before continuing, we will have to create a WordPress database in the MySQL pod:
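    # Replace <mysql-pod-name> with the pod name shown by "kubectl get pods";
    # the database name "wordpress" is only an example
    kubectl exec -it <mysql-pod-name> -- mysql -u root -psql -e "CREATE DATABASE wordpress;"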

Now you can return to WordPress and set it up.

When configuring WordPress, use mysql for the database host. The MySQL username is root and the password is sql. When WordPress is configured, we can begin to scale the cluster.

Scaling Kubernetes

Currently, we only have a single-node Kubernetes cluster running. If we try to scale a pod now, the new replicas will not all be scheduled because the Elastigroup's maximum capacity is set to one. In the Spotinst Console, under Actions for the nodes Elastigroup, select Manage Capacity, set the Maximum field to 10, and click UPDATE.


Now we can scale the pods to trigger a scaling event in Elastigroup. Let's scale the Nginx deployment up to 50 replicas:
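    # The deployment name "nginx" is assumed; adjust it to match the manifest
    kubectl scale deployment nginx --replicas=50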

(Please note that since persistent storage is not used for this WordPress sample app, you will be greeted with the WordPress configuration screen each time you visit the site after scaling the Nginx pods.)

After a minute or so, Elastigroup will automatically launch new instances and join them to the cluster, and Kubernetes will schedule all 50 pods to run across the nodes:
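    # Watch the new nodes join and the pods spread across them
    kubectl get nodes
    kubectl get pods -o wide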

Now let’s return to the Spotinst Console and take a look at the Elastigroup:


You can now see that there are a total of 4 instances running and a lot of nginx pods. Also, look at the savings so far: 72.19% after just an hour. Imagine the savings if this were a production cluster!

Elastigroup tracks scaling events under the TIMELINE tab. Looking at the image below, we can see that Elastigroup had three Scale Up events:

Scaling Down

Since Elastigroup manages the underlying infrastructure autonomously, we can scale down the number of running instances simply by scaling the number of nginx pods down from 50 to 10.
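For example, again assuming the deployment is named nginx:

    kubectl scale deployment nginx --replicas=10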


After a few minutes, you will see the number of instances decrease in the nodes Elastigroup:

Conclusion

In this post, I went over how to get started with Elastigroup and explained how it can manage your Kubernetes cluster on Spot Instances. With Elastigroup, organizations can significantly reduce Kubernetes costs by taking advantage of Spot Instances versus using reserved or on-demand instances. Without having to worry about the underlying infrastructure, you can spend more time being productive in developing and maintaining applications.