Kubernetes lets you create, deploy, manage, and scale application containers across one or more host clusters.
Environments running Kubernetes consist of several key components: a control plane, worker nodes that run containerized workloads in pods, persistent storage, and supporting tooling such as the Helm package manager.
In this article, you will learn how these components fit together: what runs in the control plane and on each node, how persistent storage works in Kubernetes, and where Helm fits into the architecture.
A Kubernetes cluster has two main parts: the control plane, which manages the cluster, and the data plane, the machines used as compute resources (worker nodes).
Kubernetes nodes can run on regular compute instances or low-cost spot instances. Learn more in our guide to Kubernetes spot instances.
[Diagram: Kubernetes cluster architecture. Image source: Kubernetes]
The control plane serves as the nerve center of each Kubernetes cluster. It includes the components that control the cluster and hold its state data and configuration.
The Kubernetes control plane is responsible for ensuring that the Kubernetes cluster attains a desired state, defined by the user in a declarative manner. The control plane interacts with individual cluster nodes using the kubelet, an agent deployed on each node.
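For example, a user might declare the desired state as a Deployment manifest like the minimal sketch below (the name, image, and replica count are placeholders), and the control plane then works to make the cluster match it:

```yaml
# A minimal Deployment manifest declaring a desired state:
# three replicas of an nginx pod. The control plane continuously
# reconciles the cluster toward this declared state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Applying this manifest with kubectl apply -f deployment.yaml hands the desired state to the API server; the scheduler and controllers then create and maintain the three pods.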
Here are the main components of the control plane:
The API server (kube-apiserver) provides the API that serves as the front end of the Kubernetes control plane. It is responsible for handling external and internal requests, determining whether a request is valid and then processing it. The API can be accessed via the kubectl command-line interface, via other tools such as kubeadm, and via REST calls.
The scheduler (kube-scheduler) is responsible for placing pods on specific nodes according to automated workflows and user-defined conditions, which can include resource requests, affinity rules, taints and tolerations, priority, persistent volumes (PVs), and more.
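As an illustration (the names, labels, and values are hypothetical), a single pod spec can express several of these scheduling inputs at once:

```yaml
# Hypothetical pod spec showing inputs the scheduler considers:
# a priority class, resource requests, node affinity, and a toleration.
apiVersion: v1
kind: Pod
metadata:
  name: analytics-worker
spec:
  priorityClassName: high-priority   # assumes this PriorityClass exists in the cluster
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sleep", "3600"]
      resources:
        requests:
          cpu: "500m"
          memory: "256Mi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "analytics"
      effect: "NoSchedule"
```

The scheduler only places this pod on a node labeled disktype=ssd with enough free CPU and memory, and it tolerates the dedicated=analytics taint if present.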
The Kubernetes controller manager (kube-controller-manager) runs the control loops that monitor and regulate the state of a Kubernetes cluster. It watches the current state of the cluster and the objects within it, and sends instructions to move the cluster towards the cluster operator's desired state.
The controller manager is responsible for several controllers that handle automated activities at the cluster or pod level, including the replication, namespace, and service accounts controllers, as well as the controllers behind Deployments, StatefulSets, and DaemonSets.
etcd is a key-value database that holds data about your cluster's state and configuration. It is distributed and fault tolerant.
The cloud controller manager embeds cloud-specific control logic; for example, it can access the cloud provider's load balancer service. It lets you connect a Kubernetes cluster to the API of a cloud provider, and it decouples the Kubernetes cluster from the components that interact with the cloud platform, so elements inside the cluster do not need to be aware of the implementation specifics of each cloud provider.
The cloud-controller-manager runs only controllers specific to the cloud provider and is not required for on-premises Kubernetes environments. It combines multiple, logically independent control loops into one binary that runs as a single process. It can be used to scale a cluster by adding more nodes on cloud VMs, and to leverage cloud provider high availability and load balancing capabilities to improve resilience and performance.
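For example, on a supported cloud, creating a Service of type LoadBalancer prompts the cloud controller manager's service controller to provision the provider's load balancer. A minimal sketch (the name, selector, and ports are placeholders):

```yaml
# On a supported cloud, the cloud controller manager provisions a
# provider load balancer for a Service of type LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```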
Nodes are physical or virtual machines that can run pods as part of a Kubernetes cluster. A cluster can scale up to 5000 nodes. To scale a cluster’s capacity, you can add more nodes.
A pod serves as a single application instance, and is considered the smallest unit in the object model of Kubernetes. Each pod consists of one or more tightly coupled containers, and configurations that govern how containers should run. To run stateful applications, you can connect pods to persistent storage, using Kubernetes Persistent Volumes—learn more in the following section.
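A minimal sketch of a pod with two tightly coupled containers sharing a volume (the names and images are illustrative):

```yaml
# A pod with two tightly coupled containers that share an emptyDir volume:
# the sidecar writes content that the web server serves.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-content
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: shared-content
          mountPath: /usr/share/nginx/html
    - name: content-sidecar
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /content/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-content
          mountPath: /content
```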
Learn more in our detailed guide to the Kubernetes pod
Each node runs a container runtime engine, which is responsible for running containers. Docker is a popular container runtime engine, but Kubernetes also supports other runtimes compliant with the Open Container Initiative (OCI), such as containerd and CRI-O.
Each node runs a kubelet, a small application that communicates with the Kubernetes control plane. The kubelet is responsible for ensuring that the containers specified in a pod configuration are running on its node, and it manages their lifecycle, executing the actions commanded by the control plane.
All compute nodes run kube-proxy, a network proxy that facilitates Kubernetes networking services. It handles network communications inside and outside the cluster, forwarding traffic itself or relying on the packet filtering layer of the operating system.
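For instance, when a Service like the following sketch is created (the selector and ports are placeholders), kube-proxy programs packet-filtering rules on each node so traffic sent to the Service's cluster IP is forwarded to one of the matching pods:

```yaml
# A ClusterIP Service; kube-proxy routes traffic for its virtual IP
# to the pods selected by the app=web label.
apiVersion: v1
kind: Service
metadata:
  name: web-internal
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```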
Container networking enables containers to communicate with hosts or other containers. It is often achieved by using the Container Network Interface (CNI), a joint initiative by Kubernetes, Apache Mesos, Cloud Foundry, Red Hat OpenShift, and others.
CNI offers a standardized, minimal specification for network connectivity in containers. You can use a CNI plugin by passing the kubelet the --network-plugin=cni command-line option. The kubelet then reads files from the directory specified by --cni-conf-dir and uses that CNI configuration to set up networking for each pod.
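For illustration only, a CNI configuration file in the --cni-conf-dir directory (commonly /etc/cni/net.d) might look like the following sketch, using the standard bridge and host-local plugins; CNI configurations are JSON files, and the network name and subnet here are placeholders:

```json
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
```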
Containers are designed as immutable entities: once a container shuts down, all data created during its lifetime is lost. While this stateless characteristic is ideal for some applications, many use cases require preserving and sharing data.
You can set up Kubernetes persistent storage to allow applications to request and consume storage resources. You do this using volumes, the basic components of the Kubernetes storage architecture.
PersistentVolumes (PVs) are storage resources designed to enable durable storage for containerized applications in Kubernetes. Each PV is a persistent storage component within the Kubernetes architecture.
PV resources belong to the cluster but exist independently of pods. To ensure statefulness, the disks and data represented by PVs persist as the cluster changes, even when pods are deleted and recreated.
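A manually created PV might look like this sketch of an NFS-backed volume (the server address, export path, and capacity are placeholders):

```yaml
# A manually (statically) provisioned PersistentVolume backed by NFS.
# A cluster administrator creates it ahead of time; pods claim it via a PVC.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.internal   # placeholder NFS server
    path: /exports/data            # placeholder export path
```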
There are two ways to create PVs: manually (static provisioning) and dynamically. Dynamic creation of PVs uses PersistentVolumeClaims (PVCs), which define the details of a resource request and let Kubernetes manage the lifecycle of PVs.
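For dynamic provisioning, a PVC such as the following sketch asks Kubernetes to create and bind a PV automatically, and a pod then mounts the claim (the storage class name is an assumption; a matching StorageClass must exist in the cluster):

```yaml
# A PersistentVolumeClaim requesting dynamically provisioned storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # assumes a StorageClass named "standard" exists
  resources:
    requests:
      storage: 5Gi
---
# A pod mounting the claim as a volume.
apiVersion: v1
kind: Pod
metadata:
  name: data-consumer
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
```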
Learn more in our detailed guide to Kubernetes Persistent Volumes
Helm is a package manager for Kubernetes, working similarly to npm in Node.js and yum in Linux. Helm deploys charts, which are complete, packaged Kubernetes applications containing pre-configured, versioned application resources. You can deploy different chart versions by using different configuration sets.
Helm plays a key role within the Kubernetes architecture. It can significantly improve productivity, simplify deployment, and reduce the complexity of Kubernetes applications. You can leverage Helm charts to easily manage cloud-native applications and microservices.
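As a sketch of how chart configuration works (the file contents below are illustrative, and the two files are shown together for brevity), a chart's Chart.yaml describes the package while values.yaml holds the defaults that chart templates reference and that each configuration set can override:

```yaml
# Chart.yaml - chart metadata (illustrative)
apiVersion: v2
name: web-app
version: 0.1.0
appVersion: "1.0.0"
---
# values.yaml - default, overridable configuration (illustrative)
replicaCount: 2
image:
  repository: nginx
  tag: "1.25"
service:
  type: ClusterIP
  port: 80
```

Installing the chart with, for example, helm install web-app ./web-app --set replicaCount=4 deploys the same packaged application with a different configuration set.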
Learn more in our detailed guide to Kubernetes helm
According to Gartner, the following best practices can help you architect effective Kubernetes clusters:
Spot Ocean from Spot by NetApp frees DevOps teams from the tedious management of their cluster’s worker nodes while helping reduce cost by up to 90%. Spot Ocean’s automated optimization delivers the following benefits:
Learn more about Spot Ocean today!
Together with our content partners, we have authored in-depth guides on several other topics that can also be useful as you explore the world of CI/CD.