
This post follows our recent blog post on how to use Kops with Elastigroup, in which we explained how to download and install Kops and provided a set of commands for installing, operating, updating, and deleting Kubernetes clusters in Spotinst.
Please download the following tar file, which contains the set of commands we will be using in this blog post.
In this blog, we will take a deep dive into Kops: we'll create highly available k8s masters, set up a k8s cluster that uses private subnets for networking, and operate that cluster through a bastion server. We'll also edit and create new instance groups and set custom labels.
All worker nodes will run as highly available EC2 Spot Instances, allowing you to save up to 90% on your compute costs.
Before getting started, please download and install Kops using this link.
Create The k8s Cluster
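Every script below begins by sourcing 00-env.sh, which exports the variables the kops commands rely on. The tar file linked above contains the real one; the sketch below is only a hypothetical illustration of what such a file could look like, and every value is a placeholder you should replace with your own:

#!/bin/bash
# 00-env.sh -- hypothetical example; substitute your own values.
export KOPS_CLUSTER_NAME="stav.ek8s.com"                      # cluster DNS name
export KOPS_STATE_STORE="s3://kops-statestore"                # S3 bucket for kops state
export KOPS_CLUSTER_ZONES="us-east-1a,us-east-1b,us-east-1c"  # AZs for masters/nodes
export KOPS_CLOUD_PROVIDER="aws"
export KOPS_MASTER_SIZE="m4.large"
export KOPS_MASTER_COUNT="3"                                  # odd number for etcd quorum
export KOPS_NODE_SIZE="m4.large"
export KOPS_KUBERNETES_VERSION="1.8.6"                        # placeholder version
export KOPS_TOPOLOGY="private"                                # private subnets + bastion
export SPOTINST_CLOUD_PROVIDER="aws"                          # assumed value for this flag
export KOPS_IG_NAME="nodes"                                   # instance group for rolling updates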
. 00-env.sh && kops create cluster \
  --name $KOPS_CLUSTER_NAME \
  --zones $KOPS_CLUSTER_ZONES \
  --cloud $KOPS_CLOUD_PROVIDER \
  --master-size $KOPS_MASTER_SIZE \
  --master-count $KOPS_MASTER_COUNT \
  --node-size $KOPS_NODE_SIZE \
  --spotinst-cloud-provider $SPOTINST_CLOUD_PROVIDER \
  --kubernetes-version $KOPS_KUBERNETES_VERSION \
  --logtostderr --v 2 \
  --topology $KOPS_TOPOLOGY \
  --bastion \
  --networking calico \
  --yes
Please note the --topology private, --networking calico, and --bastion flags: the cluster will be created in private subnets within a VPC, will use the Calico network driver (one of the most common choices), and for management access we will reach the worker nodes through a bastion server.
amirams$ ./01-create.sh
I1227 15:21:14.793955 57381 s3context.go:163] Found bucket "kops-statestore" in region "us-east-1"
I1227 15:21:14.794006 57381 s3fs.go:176] Reading file "s3://kops-statestore/stav.ek8s.com/config"
I1227 15:21:15.277178 57381 channel.go:93] resolving "stable" against default channel location "https://raw.githubusercontent.com/kubernetes/kops/master/channels/"
I1227 15:21:15.277214 57381 channel.go:98] Loading channel from "https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable"
I1227 15:21:15.277460 57381 context.go:140] Performing HTTP request: GET https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable
I1227 15:21:15.712665 57381 channel.go:107] Channel contents:
spec:
  images:
...
...
...
I1227 15:24:06.017173 57381 privatekey.go:157] Parsing pem block: "RSA PRIVATE KEY"
I1227 15:24:06.017757 57381 s3fs.go:176] Reading file "s3://kops-statestore/stav.ek8s.com/secrets/kube"
I1227 15:24:06.129402 57381 loader.go:357] Config loaded from file /Users/amirams/.kube/config
Kops has set your kubectl context to stav.ek8s.com

Cluster is starting. It should be ready in a few minutes.
The cluster has been created.
Now we should expect to see 5 Elastigroups starting in our Spotinst console:
bastions.stav.ek8s.com – On-Demand
master-us-east-1a-1.masters.stav.ek8s.com – On-Demand
master-us-east-1b-1.masters.stav.ek8s.com – On-Demand
master-us-east-1c-1.masters.stav.ek8s.com – On-Demand
nodes.stav.ek8s.com – Spot Instances
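Since the topology is private, the masters and worker nodes have no public IP addresses, so all SSH access goes through the bastion group. A hedged sketch of what connecting looks like (the bastion hostname below is illustrative; kops typically fronts the bastion with a load balancer, and the Debian-based kope.io images use the admin user):

# Forward your SSH agent so you can hop from the bastion to internal nodes.
ssh-add ~/.ssh/id_rsa
ssh -A admin@bastion.stav.ek8s.com           # bastion endpoint (illustrative)
ssh admin@ip-172-20-38-156.ec2.internal      # an internal node, reached from the bastion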
Let’s wait a couple of minutes and then validate the cluster:
amirams$ . 00-env.sh && kops validate cluster
Using cluster from kubectl context: stav.ek8s.com

Validating cluster stav.ek8s.com

INSTANCE GROUPS
NAME                 ROLE     MACHINETYPE                          MIN  MAX  SUBNETS
bastions             Bastion  m3.medium                            1    1    utility-us-east-1a,utility-us-east-1b
master-us-east-1a-1  Master   m4.large                             1    1    us-east-1a
master-us-east-1a-2  Master   m4.large                             1    1    us-east-1a
master-us-east-1b-1  Master   m4.large                             1    1    us-east-1b
nodes                Node     m4.large,m3.large,c4.large,c3.large  2    2    us-east-1a,us-east-1b

NODE STATUS
NAME                           ROLE    READY
ip-172-20-38-156.ec2.internal  node    True
ip-172-20-51-72.ec2.internal   master  True
ip-172-20-58-136.ec2.internal  master  True
ip-172-20-75-131.ec2.internal  node    True
ip-172-20-89-110.ec2.internal  master  True

Your cluster stav.ek8s.com is ready
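Since kops has already set our kubectl context, we can also sanity-check the cluster directly against the API server:

# List the nodes as Kubernetes sees them (3 masters + 2 workers).
kubectl get nodes -o wide
# Confirm the system pods (calico, kube-dns, etc.) are running.
kubectl get pods -n kube-system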
Edit Instance Group and Add Custom Labels
Let’s now go over the commands and the sequence needed to edit an existing instance group and then add custom labels to it.
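Before editing, you can list the instance groups kops manages for the cluster; for example:

# Show all instance groups in the cluster's state store.
. 00-env.sh && kops get ig \
  --name $KOPS_CLUSTER_NAME \
  --state $KOPS_STATE_STORE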
Edit Instance Group:
. 00-env.sh && kops edit ig nodes \
  --name $KOPS_CLUSTER_NAME \
  --state $KOPS_STATE_STORE \
  --logtostderr --v 2
This command downloads the cluster state from S3 and opens a vim editor so we can make changes to the default instance group, called “nodes”:
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-12-27T23:21:17Z
  labels:
    kops.k8s.io/cluster: stav.ek8s.com
    tasks: cpu
  name: nodes
spec:
  image: kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-12-02
  machineType: m4.large,m3.large,c4.large,c3.large
  maxSize: 2
  minSize: 2
  role: Node
  subnets:
  - us-east-1a
  - us-east-1b
I’ve added the label tasks: cpu to mark these as CPU nodes.
I1227 16:31:20.271698 59454 s3fs.go:176] Reading file "s3://kops-statestore/stav.ek8s.com/instancegroup/nodes"
I1227 16:31:20.759049 59454 s3fs.go:113] Writing file "s3://kops-statestore/stav.ek8s.com/instancegroup/nodes"
I1227 16:31:20.759106 59454 s3fs.go:133] Calling S3 PutObject Bucket="kops-statestore" Key="stav.ek8s.com/instancegroup/nodes"
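A side note: the tasks: cpu entry above sits under metadata.labels, i.e. on the kops InstanceGroup object itself. If you also want the label to appear on the Kubernetes Node objects, so it can drive scheduling with nodeSelector, kops supports a nodeLabels map in the instance group spec; a minimal sketch of that variant:

spec:
  nodeLabels:
    tasks: cpu     # propagated to the Node objects on the next rolling update

After updating and rolling the cluster, such nodes would match kubectl get nodes -l tasks=cpu.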
Update Cluster
As you can see from the output log, the changes have been stored in the S3 state bucket, but have not yet been applied to the kops cluster. In order to apply the update, let’s run:
amirams$ . 00-env.sh && kops update cluster \
  --name $KOPS_CLUSTER_NAME \
  --state $KOPS_STATE_STORE \
  --logtostderr --v 2 \
  --yes
Kops is now actually updating the cluster and applying the new configuration to the resources.
I1227 16:34:48.405533 59588 s3context.go:163] Found bucket "kops-statestore" in region "us-east-1"
I1227 16:34:48.405581 59588 s3fs.go:176] Reading file "s3://kops-statestore/stav.ek8s.com/config"
I1227 16:34:48.866690 59588 s3fs.go:213] Listing objects in S3 bucket "kops-statestore" with prefix "stav.ek8s.com/instancegroup/"
...
...
...
I1227 16:35:05.069416 59588 loader.go:357] Config loaded from file /Users/amirams/.kube/config
Kops has set your kubectl context to stav.ek8s.com

Cluster changes have been applied to the cloud.

Changes may require instances to restart: kops rolling-update cluster
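Tip: if you omit the --yes flag, kops update cluster only prints a preview of the changes it would make, which is a safe way to review the diff before applying it:

# Preview only -- nothing changes until --yes is added.
. 00-env.sh && kops update cluster \
  --name $KOPS_CLUSTER_NAME \
  --state $KOPS_STATE_STORE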
Rolling Updates
As the output indicates, some changes may require a rolling update. In order to complete the update, we will roll the new configuration out to the nodes using the following command:
. 00-env.sh && kops rolling-update cluster \
  --name $KOPS_CLUSTER_NAME \
  --state $KOPS_STATE_STORE \
  --node-interval 30s \
  --instance-group $KOPS_IG_NAME \
  --logtostderr --v 2 \
  --yes
The cluster will now roll the new configuration out to the nodes one by one, gracefully draining each node and waiting 1m30s for pods to stabilize after draining.
I1227 16:39:55.831359 59732 s3context.go:163] Found bucket "kops-statestore" in region "us-east-1"
I1227 16:39:55.831416 59732 s3fs.go:176] Reading file "s3://kops-statestore/stav.ek8s.com/config"
I1227 16:39:56.719417 59732 loader.go:357] Config loaded from file /Users/amirams/.kube/config
I1227 16:39:56.722515 59732 round_trippers.go:417] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kops/v1.8.1 (darwin/amd64) kubernetes/$Format" -H "Authorization: Basic YWRtaW46T2l4SHlJdDdQd1hoNEFQRFNUMU1nN3hYTER0Vk9ueWo=" https://api.stav.ek8s.com/api/v1/nodes
I1227 16:39:57.143345 59732 round_trippers.go:436] GET https://api.stav.ek8s.com/api/v1/nodes 200 OK in 420 milliseconds
...
...
...
I1227 16:44:14.190938 59732 instancegroups.go:212] Cluster validated.
I1227 16:44:14.190967 59732 rollingupdate.go:191] Rolling update completed for cluster "stav.ek8s.com"!
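Under the hood, the rolling update cordons and drains each node before terminating and replacing it, much like running kubectl drain by hand. Roughly (illustrative, with the drain flags of this Kubernetes generation):

# Approximately what happens per node during the rolling update:
kubectl drain ip-172-20-38-156.ec2.internal --ignore-daemonsets --delete-local-data
# ...the instance is then terminated, replaced, and the cluster re-validated
# before the next node is touched.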
Create a New Instance Group
If we want to create an additional instance group, whether for different labeling purposes or just to distinguish between environments, we will use the following sequence.
. 00-env.sh && kops create ig nodesmore \
  --name $KOPS_CLUSTER_NAME \
  --state $KOPS_STATE_STORE \
  --role node \
  --subnet us-east-1a \
  --logtostderr --v 2
This command will open a vim editor so we can verify the instance group configuration before it is written:
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  name: nodesmore
spec:
  image: kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-12-02
  machineType: m3.medium
  maxSize: 2
  minSize: 2
  role: Node
  subnets:
  - us-east-1a
:wq!
Saving and exiting the editor writes the new instance group configuration to the state store:
I1227 16:47:22.295511 59934 editor.go:127] Opening file with editor [vi /var/folders/rq/jqlbr35d6gg3_bjhwgzdjm3m0000gn/T/kops-edit-8augbyaml]
I1227 16:48:42.413855 59934 s3fs.go:176] Reading file "s3://kops-statestore/stav.ek8s.com/instancegroup/nodesmore"
I1227 16:48:47.824308 59934 s3fs.go:113] Writing file "s3://kops-statestore/stav.ek8s.com/instancegroup/nodesmore"
I1227 16:48:47.824387 59934 s3fs.go:133] Calling S3 PutObject Bucket="kops-statestore" Key="stav.ek8s.com/instancegroup/nodesmore" SSE="AES256" ACL="" BodyLen=276
Please note, again, that this did not actually create the instance group; it only updated the kops state bucket with the new config. In order to trigger the instance group creation, we should run the kops update cluster command:
. 00-env.sh && kops update cluster \
  --name $KOPS_CLUSTER_NAME \
  --state $KOPS_STATE_STORE \
  --logtostderr --v 2 \
  --yes
Now the instance group nodesmore is created. We can see it in the logs, and a new Elastigroup appears in the Spotinst console.
I1227 16:46:50.465018 59917 s3context.go:163] Found bucket "kops-statestore" in region "us-east-1"
I1227 16:46:50.465052 59917 s3fs.go:176] Reading file "s3://kops-statestore/stav.ek8s.com/config"
Kops has set your kubectl context to stav.ek8s.com

Cluster changes have been applied to the cloud.
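With instance groups labeled this way, workloads can be pinned to a particular group. Assuming the tasks: cpu label has been propagated to the Node objects (see the nodeLabels note above), a minimal, hypothetical pod spec targeting the CPU nodes might look like:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-bound-job            # hypothetical pod name
spec:
  nodeSelector:
    tasks: cpu                   # schedule only onto nodes carrying this label
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "echo running on a cpu node && sleep 3600"]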
That’s it!
Summary
This post explained how to manage a Kubernetes cluster on Spotinst using kops.
Try starting a cluster, creating a few Kubernetes resources, adding labels and instance groups, and then tearing it all down.
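When you are done experimenting, tearing the cluster down is a single command; it removes all the cloud resources kops created, including the Elastigroups:

. 00-env.sh && kops delete cluster \
  --name $KOPS_CLUSTER_NAME \
  --state $KOPS_STATE_STORE \
  --yes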
Best,
Amiram.