Kubernetes (k8s) is an open source platform. You can use it to automate, scale, and manage container workload distribution. K8s is ranked amongst the most widely used container orchestration tools. Many Kubernetes users run it in a public cloud, such as Microsoft Azure.
You can use Azure resources for Kubernetes without worrying about lock-in. Because containerization simplifies lifting and shifting older applications from on-premises to the cloud, or from one cloud provider to Azure, you can move Kubernetes workloads between clouds relatively easily, and can create a hybrid cloud or multicloud deployment.
In this article, you will learn:
- What is Azure Kubernetes?
- Tools and Services for Running Kubernetes in Azure
- How to Deploy an Azure Kubernetes Cluster
- Best Practices for Operating Kubernetes on Azure
Tools and Services for Running Kubernetes in Azure
These are the top three Azure products to use with Kubernetes. Azure also provides many integrations with third-party solutions that can help you manage Kubernetes workloads.
Azure Kubernetes Service (AKS)
AKS is a Kubernetes service managed by Azure. To use AKS, you only need to specify the number of worker nodes and configure the options that apply to those nodes. Azure sets up and manages the Kubernetes control plane. The service itself is free, but you pay for the virtual machines, storage, and network resources you use.
AKS includes high availability clusters, automated upgrades, and event-based auto scaling. AKS is natively integrated with Azure services, including Load Balancer, Search, Monitor, Visual Studio Code, and Azure DevOps.
Visual Studio Code
Visual Studio Code is a code editor for Windows, Linux, and macOS, free for personal or commercial use. VS Code offers smart code completion, debugging, Git integration, and various extensions. For example, you can use the Kubernetes Tools extension within VS Code to create Helm charts and Kubernetes manifests. It can also help you troubleshoot live applications running on Kubernetes.
You can use VS Code to deploy your application to self-hosted Kubernetes, or directly to a managed cluster on AKS. To deploy to AKS, you’ll need to have the Azure Command Line Interface (CLI) installed on your development workstation.
Azure Dev Spaces
Azure Dev Spaces is a development service for AKS that:
- Simplifies unit testing by eliminating the need for cloning and dependency modeling
- Eases collaboration, allowing you to run and debug AKS containers as a team and share cluster environments
- Supports both the Azure CLI and Visual Studio Code
How to Deploy an Azure Kubernetes Cluster
You can define Kubernetes role-based access control (RBAC) for AKS clusters. By assigning roles to users, you can scope permissions to a single namespace or to an entire cluster. The Azure CLI enables RBAC by default whenever you create an AKS cluster.
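As a hedged illustration, a namespace-scoped Role and RoleBinding might look like the following sketch (the `dev` namespace and the user name are hypothetical):

```yaml
# Role granting read-only access to pods in the (hypothetical) "dev" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind the Role to a (hypothetical) user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: "dev-user@example.com"   # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Using a Role rather than a ClusterRole keeps the permissions confined to the single namespace, which matches the namespace-scoped option described above.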
Creating an AKS cluster
The following tutorial shows how you can create a cluster, which will be called myAKSCluster, and placed in a resource group called myResourceGroup. You don’t need to create an Azure Active Directory service principal, because one is created automatically to enable communication between the AKS cluster and other Azure resources.
In this example, the service principal is given permission to download images from Azure Container Registry (ACR).
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 2 \
  --generate-ssh-keys \
  --attach-acr <acrName>
An alternative is to manually set up a service principal to retrieve ACR images.
The deployment should complete in a matter of minutes, and the AKS deployment information should be returned in JSON format.
Installing the Kubernetes command line interface
To create a connection with the Kubernetes cluster on your local machine, you can use the kubectl command-line client.
In Azure Cloud Shell, kubectl should be installed without any manual configuration. If you want to install locally, you can do so using the following command:
az aks install-cli
Connect to the cluster using kubectl
Use the az aks get-credentials command to configure kubectl to establish a connection to the Kubernetes cluster.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
To verify the connection, list the cluster nodes:
kubectl get nodes
Best Practices for Operating Kubernetes on Azure
AKS Cluster Performance: Resource Requests and Limits
By defining pod resource requests and limits, you can better balance workloads in your Kubernetes clusters. Kubernetes uses requests and limits to manage the allocation of cluster resources such as CPU and memory.
Requests tell the Kubernetes scheduler how much of a resource a pod needs, so the pod can be placed on a node with sufficient capacity. Limits are used by Kubernetes to control and restrict the resources an individual pod can consume within the cluster.
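To illustrate, here is a minimal sketch of requests and limits in a pod spec (the pod name, image, and values are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app          # hypothetical pod
spec:
  containers:
  - name: app
    image: nginx:1.25     # placeholder image
    resources:
      requests:
        cpu: "250m"       # scheduler reserves a quarter of a CPU core
        memory: "256Mi"
      limits:
        cpu: "500m"       # container is throttled above half a core
        memory: "512Mi"   # container is terminated if it exceeds this
```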
If your pods don’t define requests or limits, and you share clusters with other teams or applications, you can specify resources at the namespace level. A ResourceQuota object sets aggregate request and limit constraints for all pods in a specific namespace. Alternatively, you can apply default values to all containers in a namespace by using a LimitRange object.
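As a sketch, a ResourceQuota and a LimitRange for a hypothetical `team-a` namespace might look like this (all values are illustrative):

```yaml
# ResourceQuota: caps aggregate resource use across all pods in the namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a       # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# LimitRange: default requests/limits for containers that don't set their own
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-defaults
  namespace: team-a
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: "250m"
      memory: 256Mi
    default:
      cpu: "500m"
      memory: 512Mi
```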
After calculating and determining the amount of resources required for each pod, you’ll need to determine how many worker nodes the cluster will need. It is recommended to select a node configuration with the smallest amount of resources that can still handle the workloads. Over-resourced nodes incur unnecessary costs on Azure and limit your scalability.
You can then label the nodes and assign them to specific tasks. For example, you can use node affinity to place pods on a node with SSD storage, or to ensure that related pods run together on the same node. Use taints and tolerations to prevent pods from being scheduled on specific cluster nodes.
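As a hedged sketch, node affinity and a toleration in a pod spec might look like the following (the label key `disktype` and the taint key/value are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-app           # hypothetical pod
spec:
  affinity:
    nodeAffinity:
      # only schedule onto nodes labeled disktype=ssd (hypothetical label)
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
  tolerations:
  # allow scheduling onto nodes carrying a (hypothetical) dedicated=special taint
  - key: "dedicated"
    operator: "Equal"
    value: "special"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx:1.25     # placeholder image
```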
Kubernetes clusters should run close to end users, even if cluster operators are in a different location. If you have users in multiple locations, it is recommended that you have a cluster at each location.
This type of architecture not only reduces latency, but can also help with high availability in the event of a zone failure. In Azure, the best option is typically to select two paired regions (regions physically near each other). Azure prioritizes disaster recovery and maintenance work in paired regions, so that at least one of the paired regions continues operating.
You can also use the Traffic Manager service to direct traffic between AKS clusters. Traffic can be redirected to improve latency, minimize geographic distance, or as a response to downtime. The user accesses a DNS endpoint, is routed to the Traffic Manager, and then the Traffic Manager provides the most appropriate AKS endpoint.
Kubernetes workloads may be stateful, requiring persistent storage, or stateless. In either case, using the correct type of storage can improve the performance of AKS clusters. Some operations require storage even in a stateless environment, for example pulling images from a container repository.
Use solid-state drives (SSDs) in production environments. If you need multiple concurrent connections, use network-attached storage (NAS). Other storage options on Azure include Azure Files, Azure Managed Disks (SSD), dysk (a persistent storage service built into AKS), and blobfuse (a virtual file system). Keep in mind that each node has a limit on the number of disks that can be attached to it.
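For example, a persistent volume claim backed by premium SSD storage might look like the following sketch (this assumes a `managed-premium` storage class, which AKS clusters commonly provide; verify the class name in your cluster with `kubectl get storageclass`):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssd-data                      # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce                     # Azure managed disks attach to one node at a time
  storageClassName: managed-premium   # assumed premium SSD class on AKS
  resources:
    requests:
      storage: 32Gi
```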
The size of the node also determines overall storage performance in the cluster. Keep in mind that Azure provides different VM sizes that have the same CPU and memory resources, but different storage configurations.
Kubernetes serves as a solid foundation for container orchestration, and since it’s open-source and has become an industry standard for cloud-native operations, there are many third-party offerings. Notable k8s services and tools in Azure are AKS, Visual Studio Code, and Azure Dev Spaces.
As explained above, deploying an AKS cluster is relatively simple. However, you should carefully define resource requests and limits to ensure optimal performance. Remember to size worker nodes appropriately, and choose the storage type that fits your operation best. For optimal billing and performance, keep an eye on your resources and optimize continuously.