The National Institute of Standards and Technology (NIST) Special Publication on Container Security provides a comprehensive review of the major risks for core components of a container system. One of the most obvious objects of concern (alongside a host of other things to keep a good security professional up at night) is, of course, the containers running on your platform.
Container security is obviously critical to your applications and data. It’s a complex topic, but one way to think about it is to break it down into three component parts and a simple mantra:
Build secure containers, pull them from a trusted registry and deploy with ‘least privilege’, then protect them at runtime.
While this might be a gross oversimplification (as befits a 1,000-word blog), it does provide us with a starting framework to assess the controls and assurance we have in place when it comes to container images.
Building secure containers
Container images are built from layers, with a readable/writable layer containing specific code or configuration (your HTML assets, custom binaries, Python scripts, etc.) sitting on top of a stack of read-only layers containing the base image, software tools, and libraries.
Monitoring for vulnerabilities
Any one of these layers – from the base image to the libraries and packages you add – can contain vulnerabilities. Some might be trivial, some might be critical. When we analyze containers using Prisma Cloud, it’s not uncommon to find multiple vulnerabilities in a single container image.
There are obviously other misconfigurations you need to be aware of, like storing private keys or other secrets in the container image, or running an SSH daemon.
The good news is that container images are scannable: the layer manifest can be parsed and the components assessed against known vulnerabilities. This process should be part of all container build CI/CD pipelines, so that containers stored in registries have at least a known vulnerability state.
Since new vulnerabilities in old software are routinely found, scanning needs to be an ongoing process with routine reassessment of container images against the latest known vulnerabilities.
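As an illustration, here is what such a scanning step might look like in a CI pipeline, using the open-source Trivy scanner via its GitHub Actions integration (Prisma Cloud ships its own scanner for this purpose; the repository, image name, and registry below are placeholders):

```yaml
# Illustrative GitHub Actions job: build the image, scan it, and fail
# the pipeline if critical or high-severity vulnerabilities are found.
# Image name and registry are placeholders.
jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - name: Scan image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: registry.example.com/myapp:${{ github.sha }}
          severity: CRITICAL,HIGH
          exit-code: '1'   # non-zero exit fails the build on findings
```

Because the pipeline fails on findings, vulnerable images never make it into the registry in the first place, which complements the ongoing re-scanning described above.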
Infrastructure as Code
It’s also important to assess how a container is going to be deployed at runtime (a topic we will touch on later in this blog). You don’t want your containers running as root, sharing networks with hosts, or making insecure file system mounts, for example. These attributes are defined in Infrastructure as Code (IaC) files, like a Kubernetes YAML file or an AWS CloudFormation template.
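The runtime attributes mentioned above live in the pod spec itself. A rough sketch of a hardened Kubernetes manifest (the names and image are placeholders, but the security fields are standard Kubernetes settings):

```yaml
# Illustrative hardened pod spec: avoids the risky attributes called
# out above (root user, shared host network, writable filesystem).
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  hostNetwork: false                  # don't share the node's network namespace
  containers:
    - name: app
      image: registry.example.com/example-app:1.0
      securityContext:
        runAsNonRoot: true            # refuse to start the container as root
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true  # no writable root filesystem
```

An IaC scanner run in CI can flag manifests that are missing settings like these before they are ever committed or deployed.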
Since IaC files are increasingly written by developers who also write the application code, giving them tools to scan their IaC files before committing them makes a lot of sense, but don’t stop scanning them as part of your CI/CD build workflow.
The end result of your CI/CD pipeline should be a container that, as well as performing the required function, has a level of security inspection and testing such that it can be, to an extent, trusted.
Trusted registries and secure deployments
NIST SP 800-190 states:
“Organizations should maintain a set of trusted images and registries and ensure that only images from this set are allowed to run in their environment, thus mitigating the risk of untrusted or malicious components being deployed.”
Kubernetes will, however, generally run any container image, from any accessible container registry. So merely establishing an organization-specific container registry isn’t going to be sufficient.
Trusted Containers and Registries
You need to establish mechanisms to prevent untrusted containers being run within your cluster, by specifying trusted registries, or even trusted images. This can be done using an admission controller and associated rules, or as part of a more comprehensive container security runtime defense solution like Prisma Cloud.
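One common open-source way to enforce trusted registries is an OPA Gatekeeper admission constraint. The sketch below assumes the `K8sAllowedRepos` constraint template from the Gatekeeper policy library is already installed in the cluster; the registry name is a placeholder:

```yaml
# Illustrative Gatekeeper constraint: reject any pod whose image does
# not come from the organization's trusted registry.
# Assumes the K8sAllowedRepos template (Gatekeeper library) is installed.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: allowed-image-repos
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    repos:
      - "registry.example.com/"   # trusted registry prefix (placeholder)
```

Because this runs at admission time, it applies to every pod creation, not just those coming through your CI/CD pipeline.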
Stopping untrusted containers at runtime, rather than just during the build process, is important. A common component of an attack is to compromise a running container, then use the pod’s service account and API server access to start additional malicious pods within the cluster.
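One simple mitigation for that service-account attack path is to stop mounting API credentials into pods that don’t need them. This is standard Kubernetes behavior controlled by one field (the names below are placeholders):

```yaml
# Illustrative: a pod that never talks to the Kubernetes API server
# shouldn't carry API credentials an attacker could reuse.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-api-access
automountServiceAccountToken: false   # don't mount a token by default
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  serviceAccountName: no-api-access
  automountServiceAccountToken: false # can also be set per-pod
```

With no token mounted, a compromised container has no ready-made credential for starting additional pods through the API server.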
This concept of identity and access leads into the next key area.
There are a number of aspects to secure container deployments, which can be generalized as how a container interacts with the local runtime (the container engine, the underlying compute node, and the network) and how it interacts with the Kubernetes control plane, via the API.
Since how a container interacts with the runtime is itself largely configured via that same API, there is something of a circular argument here, but as ever in security, we are aiming to practice ‘defense in depth’.
In our build phase, if we have properly inspected the container image and IaC files, and then configured our container platform to only run images from trusted registries, then a lot of threats should be mitigated, but that’s no replacement for real-time defense.
Since not everyone will be able to enforce such strict rules (although, really, they should), and following good ‘zero-trust’ practices, runtime defense for your container platforms should still be a key part of your armor. Prisma Cloud, for instance, deploys specialized pods onto cluster nodes that scan and analyze the composition and configuration of each container on a node at startup, and alert when policies are violated.
So now that we have used controls to ensure container security during the build and deployment phases, it’s time to think about what happens when they are running in production.
Protecting against application and network attacks
Any service connected to a network is likely to be attacked, irrespective of whether it’s internet-facing or not. Plan to protect running Kubernetes pods from network and application layer attacks by deploying container-level monitoring to detect anomalies such as new connections, processes, or file system access.
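A good network-layer starting point is a default-deny policy, with allow-rules then added per workload. A minimal sketch (the namespace is a placeholder, and enforcement requires a CNI plugin that supports NetworkPolicy):

```yaml
# Illustrative default-deny NetworkPolicy: blocks all ingress traffic
# to every pod in the namespace until explicit allow-rules are added.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production      # placeholder namespace
spec:
  podSelector: {}            # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress                # no ingress rules listed, so all ingress is denied
```

This narrows the attack surface at the network layer; the container-level anomaly monitoring described above then covers the application layer.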
Deploying a mix of next-generation firewalls and a cloud native security platform to protect and monitor your Kubernetes nodes and pods gives you real-time detection and blocking capabilities for a range of malware and application layer threats.
One final point is to properly secure the underlying infrastructure of your Kubernetes deployment.
With your container’s journey from code to production protected and secured, you can feel more confident that your applications and data are safe (okay, well, maybe just less at risk).