
You have probably been hearing a lot about Kubernetes. If your team is not already using it, you might be wondering: what exactly is Kubernetes, and what does it have to do with containers?
Developers using Kubernetes deploy faster and can focus on writing applications. Kubernetes enhances two kinds of efficiency: Dev efficiency and Ops efficiency. Teams running on our Kubernetes cluster love that they have fewer issues to worry about: they do not need to manage infrastructure or operating systems.
Kubernetes, sometimes abbreviated as k8s, is the standardization of the technologies that automate the most popular functions of cloud providers. Traditionally, infrastructure has not been easy to automate. In this post, I’ll explain why Kubernetes is revolutionizing the way DevOps works.
A Natural Fit For DevOps
DevOps, as you know, is the evolution of traditional infrastructure teams. The main idea of DevOps: to marry software engineering methodology (i.e. Agile planning, source control, code review, design patterns) with traditional infrastructure technical skills. In a way: “let’s make infrastructure teams more like software teams!” The idea worked: since the creation of the DevOps concept, we have seen significant influence from software teams in how DevOps teams operate.
While software teams have been reaping the benefits of standardization of their software development platforms for many years, it was not until recently that DevOps started experiencing the same love.
In a pre-Kubernetes world, not only did the infrastructure team have to design and implement functions for things like monitoring and auto-scaling but, depending on the decisions of the software team, they also needed deep knowledge of the software stack itself.
A History Of The Challenges
Looking at software teams as an example, let’s walk through the challenges of a software environment with mixed usage of Java and PHP apps. We will see how the advent of standardization allowed developers to focus on writing good code.
Java was specifically designed as an object-oriented language with a companion specification for enterprise usage (Java EE, originally J2EE), created by Sun Microsystems. An infrastructure team working on a Java stack would most likely have to know about Tomcat, Maven, Log4j, deploying WAR bundles and JVM memory settings. PHP started as a scripting language and evolved into object-oriented glory. Teams on the PHP stack would have to know about mod_php, php-fpm workers, Apache or Nginx, and PHP Composer. Both software and infrastructure teams would spend quite a bit of time just setting up initial environments.
The Java enterprise specification at least standardized application bundling: Java had a clear spec in the web application archive (WAR file). PHP and other scripting languages like Ruby never really settled on a web application archive format, so many engineers improvised bundles using TAR and exploded the files upon deployment. Infrastructure teams would deploy WAR and TAR files, which are handled very differently: a servlet container like Tomcat consumes a WAR, while Nginx or Apache serves the exploded TAR. As you can see, just getting an app up and running required a decent understanding of quite a few technologies.
In the early days of containerizing servers with Docker, software teams found containers to be an excellent way to create local development environments. The promise of isolated application containers meant that there was now a standard “application bundle” in the form of an image. However, in those early days, using Docker in Production was not trivial or stable. As infrastructure teams became DevOps teams, they put their software engineering skills to good use to try to solve the harder infrastructure problems with code.
Lots Of Ways To Do The Same Thing
In the days before containerization in Production, DevOps teams had to deal with fundamental infrastructure and design questions for every project: deploying new code, upscaling nodes, rolling out applications. There was no single right way to do any of it. Some DevOps engineers might prefer shell scripts, others might prefer Makefiles. Ubuntu purists might prefer operating-system-specific package managers like apt-get.
Some common design questions that would eventually be eliminated by Kubernetes:
- Cloud providers offer machine images for their instances, like the AWS AMI. Should we bake the application bundle into the AMI, or keep the AMI vanilla and deliver the bundle at launch?
- An EC2 instance can run commands from its user data script after it boots. How much logic do we put in user data when spinning up an instance?
- Deploying a bundle might require rcp, scp or the OS package manager. Which one do we use?
- How do we build the health checks that drive rolling deployments?
Each of these decisions was usually a unique design that required planning and an understanding of the application code. In most cases, these were considered large DevOps projects that took a lot of time to design and build.
Serverless Containers Made Kubernetes Possible
The serverless container is your application bundle. It’s even better than a bundle, because the listener (the server process itself) ships inside the image! With this major problem solved, could there be a future where DevOps would not need intimate knowledge of the application code or stack in order to keep a system up and running? In this vision, as long as the application image could run in a container, DevOps could handle it.
By marrying the logic for building and running server code into the creation of a container image, DevOps would no longer be in the business of dealing with things like failing application builds, exploding WAR files after deploying an application bundle or executing post-build scripts. This all happens in the build of the image, which is the single unit DevOps “receives” from the software team. Ideally, you’ll have your CI server put your application builds in your image repository, as sketched below.
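As a minimal sketch of that CI step, here is what building and pushing an image might look like using the Docker SDK for Python; the registry URL, image name and tag are hypothetical examples:

```python
# pip install docker
import docker

docker_client = docker.from_env()  # talks to the local Docker daemon

# Build the image from the repository's Dockerfile.
# The tag (registry/name:version) is a hypothetical example.
image, build_logs = docker_client.images.build(
    path=".", tag="registry.example.com/web:1.4.2"
)

# Push the image to the repository: this is the single unit DevOps "receives".
for line in docker_client.images.push(
    "registry.example.com/web", tag="1.4.2", stream=True, decode=True
):
    print(line)
```

In a real pipeline this would run inside the CI server, with credentials for the image repository already configured.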
By embracing technology like Docker images, DevOps can now standardize how these images get deployed, upscaled, monitored… I think you get the idea. That’s where Kubernetes comes into play. Kubernetes builds on Google’s 15 years of experience running containers in Production with Borg, its internal predecessor.
Kubernetes vs. Amazon’s Elastic Container Service (ECS) used to be a common question. For a while, ECS was seen as a viable alternative, but DevOps teams all over the world have since embraced the open source nature, pluggability and cloud-agnostic vision of Kubernetes.
What Can Kubernetes Do?
Kubernetes significantly reduces or eliminates the custom tooling DevOps teams once built for the following functions:
- setting environment variables
- creating clusters
- server monitoring
- load balancing
- node scaling (upscaling, downscaling, auto-scaling)
- auto-failover
- rolling deployments
Kubernetes markets itself as a “container-centric management environment [that] orchestrates computing, networking, and storage infrastructure on behalf of user workloads”. Because it is not a platform as a service, you can make use of any combination of its features and even plug in additional optional ones. Kubernetes itself is just software.
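To make that concrete, here is a hedged sketch showing how several items from the list above (environment variables, replica scaling, health checks, rolling deployments) become declared fields rather than custom scripts, using the official Kubernetes Python client and the Deployment object covered in the next section. The image name, labels and health endpoint are hypothetical, and the snippet assumes a cluster reachable through your kubeconfig:

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # use the same credentials kubectl uses

container = client.V1Container(
    name="web",
    image="registry.example.com/web:1.4.2",                     # hypothetical image
    env=[client.V1EnvVar(name="APP_ENV", value="production")],  # environment variables
    ports=[client.V1ContainerPort(container_port=8080)],
    liveness_probe=client.V1Probe(                              # health check
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        period_seconds=10,
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # scaling up or down is just a field change
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        strategy=client.V1DeploymentStrategy(type="RollingUpdate"),  # rolling deployments
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same object is more commonly written as a YAML manifest and applied with kubectl; the point is that each function in the list above becomes a declared field instead of a custom algorithm.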
New Terminology
According to the Kubernetes docs, while the technology helps you manage containers, it actually obviates the need for “orchestration” in the strict sense of executing a defined workflow (first A, then B, then C). Instead, Kubernetes comprises independent control loops that continuously drive the current state toward the desired state, so the sequence of events matters less.
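As a toy illustration of that control-loop idea (plain Python, no Kubernetes API involved), a controller only knows the desired state and repeatedly nudges the observed state toward it:

```python
import time

desired = {"replicas": 3}   # the state you declared
current = {"replicas": 0}   # stand-in for the observed cluster state

def reconcile():
    """One pass of a control loop: observe, compare, take one corrective step."""
    if current["replicas"] < desired["replicas"]:
        current["replicas"] += 1   # "start a Pod"
    elif current["replicas"] > desired["replicas"]:
        current["replicas"] -= 1   # "stop a Pod"

# Real controllers run forever; a few iterations are enough to converge here.
for _ in range(5):
    reconcile()
    print(f"current={current['replicas']} desired={desired['replicas']}")
    time.sleep(0.1)
```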
In the Kubernetes world, the Pod is the smallest deployable unit in the object model, and a Deployment describes how many replicas of a Pod should run and how they should be rolled out. Your container, your application, runs in a Pod managed by a Deployment. Your Pod might be configured to have multiple containers, for example when your application depends on a tightly coupled helper container. When thinking about how Kubernetes handles application deployments, the Pod is the fundamental unit. Each Pod is assigned a unique IP address. Pods can be organized in logical groups and exposed under a single IP address as a Service. Sometimes Services are used to create external load balancers.
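Continuing the hedged Python-client sketch from above, exposing a labeled group of Pods under one address, optionally as an external load balancer, might look like this (names and labels hypothetical):

```python
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},  # the logical group: every Pod labeled app=web
        ports=[client.V1ServicePort(port=80, target_port=8080)],
        type="LoadBalancer",      # ask the cloud provider for an external load balancer
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```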
Pods run on worker Nodes; these are your physical or virtual machines. In the AWS world, you’ll define Node Groups of EC2 instances. Nodes are grouped into Clusters.
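A quick sketch with the same Python client, assuming a reachable cluster, lists the Nodes in your Cluster:

```python
from kubernetes import client, config

config.load_kube_config()

# Each item is a worker machine registered with the cluster.
for node in client.CoreV1Api().list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```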
Conclusion
Embracing Kubernetes on your DevOps team is a great idea. As long as your software team is containerizing apps, you can start adopting these new standards quickly, and the time saved by not reinventing the wheel will be measurable. Products like SpotInst Ocean can help you begin transitioning your infrastructure to Kubernetes with significant cost savings: by running your worker nodes on Spot Instances, Ocean gives you all the advantages of Kubernetes on the Google, Amazon or Microsoft clouds, hosted on their cost-saving Spot Instance products.