
How to handle Kubernetes security


Handling Kubernetes security means remediating known vulnerabilities during the build phase, fixing misconfigurations during the build and deploy phases, and responding to threats at runtime. These phases map to the findings of the latest State of Kubernetes and Container Security report, in which 24% of serious container security issues were vulnerabilities that could be remediated, nearly 70% were misconfigurations, and 27% were runtime security incidents.

Containers are everywhere

Kubernetes is an open source container orchestration platform used to manage hundreds (sometimes thousands) of Linux® containers batched into clusters. It relies heavily on application programming interfaces (APIs) connecting containerized microservices. This distributed nature makes it difficult to quickly investigate which containers might have vulnerabilities, may be misconfigured, or pose the greatest risks to your organization.

The solution is to develop a comprehensive view of container deployments that captures critical system-level events in each container.

Images and registries can be misused

Container images are immutable templates used to create new containers. An image is typically built from a base image and then modified to serve a distinct purpose.

The solution is to set up policies determining how images are built, and how they’re stored in image registries. Base images need to be regularly tested, approved, and scanned. And only images from allowed image registries should be used to launch containers in a Kubernetes environment.
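
One way to enforce such a registry allow list—shown here on OpenShift, since admission-level enforcement varies by platform; the registry names are illustrative—is a cluster-wide image configuration:

```yaml
# Hypothetical OpenShift cluster image policy: pods may only use images
# pulled from the approved registries listed below.
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  registrySources:
    allowedRegistries:
      - registry.example.com   # internal, scanned registry (illustrative)
      - quay.io
```

On plain Kubernetes, the same goal is commonly met with an admission controller or a policy engine.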

Uninhibited container communication

Containers and pods need to talk to each other within deployments, as well as to other internal and external endpoints, to function properly. If a container is breached, the ability of an attacker to move laterally within the environment is directly related to how broadly that container can communicate with other containers and pods. In a sprawling container environment, implementing network segmentation can be prohibitively difficult given the complexity of configuring such policies manually.

The solution is to track traffic moving between namespaces, deployments, and pods; and determine how much of that traffic is actually allowed.

Default container network policies

By default, Kubernetes does not apply a network policy to a pod—the smallest deployable unit of a Kubernetes application. Network policies behave like firewall rules: they control how pods communicate. Without them, any pod can talk to any other pod.

The solution is to define network policies that limit pod communication to only defined assets, and to mount secrets in read-only volumes within containers instead of passing them as environment variables.
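
A minimal sketch of both ideas, with illustrative names: a default-deny NetworkPolicy plus one narrow allow rule, and a pod that mounts a Secret as a read-only volume instead of reading it from environment variables:

```yaml
# Deny all ingress to pods in the "payments" namespace...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}       # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress           # no ingress rules listed, so all inbound traffic is denied
---
# ...then allow traffic only from the "frontend" app on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
---
# Mount a Secret as a read-only volume instead of an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: payments-api
  namespace: payments
spec:
  containers:
    - name: api
      image: registry.example.com/payments-api:1.2.3   # illustrative
      volumeMounts:
        - name: db-credentials
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: db-credentials
      secret:
        secretName: payments-db-credentials
```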

Container and Kubernetes compliance

Cloud-native environments facilitated by Kubernetes should (like all other IT environments) comply with security best practices, industry standards, benchmarks, and internal organizational policies—and prove that compliance. Sometimes this means adapting compliance strategies so Kubernetes environments meet controls originally written for traditional application architectures.

The solution is to monitor for compliance adherence and automate audits.


Compromised containers at runtime

Kubernetes promotes immutable infrastructure: running containers aren't patched in place—they are destroyed and recreated from an updated image. A compromised container can run malicious processes, such as cryptocurrency mining or port scanning.

The solution is to destroy any compromised container, rebuild an uncompromised container image, and relaunch it.

Kubernetes security begins in the build phase by creating strong base images and adopting vulnerability scanning processes.

  • Use minimal base images. Avoid images that include operating system (OS) package managers or shells—components that can harbor unknown vulnerabilities—or remove those components in a later build stage.

  • Don’t add unnecessary components. Even common troubleshooting tools can become attack tools when included in production images.

  • Use up-to-date images only. Update component versions.

  • Use an image scanner. Identify vulnerabilities within images—broken down by layer.

  • Integrate security into CI/CD pipelines. Make image scanning a repeatable, automated step that fails continuous integration builds and generates alerts when severe, fixable vulnerabilities are found.

  • Label permanent vulnerabilities. Add known vulnerabilities that can’t be fixed, aren’t critical, or don’t need to be fixed right away to an allow list. 

  • Implement defense-in-depth. Standardize policy checks and remediation workflows to detect and update vulnerable images.
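
The CI/CD step above can be sketched as a pipeline job—assuming GitHub Actions and the Trivy scanner; the image name is illustrative:

```yaml
# Hypothetical CI job: build the image and fail the build when the
# scanner finds fixable HIGH or CRITICAL vulnerabilities.
name: image-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - name: Scan image with Trivy
        uses: aquasecurity/trivy-action@0.24.0
        with:
          image-ref: registry.example.com/myapp:${{ github.sha }}
          severity: HIGH,CRITICAL
          ignore-unfixed: true   # skip vulnerabilities with no available fix
          exit-code: "1"         # non-zero exit fails the CI build
```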

Configure Kubernetes infrastructure security before workloads are deployed. That begins by knowing as much as possible about the deployment process: what’s being deployed (images, components, pods), where it’s deployed (clusters, namespaces, and nodes), how it’s deployed (privileges, communication policies, applied security controls), what it can access (secrets, volumes), and which compliance standards apply.

  • Use namespaces. Separating workloads into namespaces can help contain attacks, and limit the impact of mistakes or destructive actions by authorized users.

  • Use network policies. Kubernetes allows every pod to contact every other pod by default, but network segmentation policies can override that default.

  • Restrict permissions to secrets. Only mount secrets that deployments require.

  • Assess container privileges. Provide only the capabilities, roles, and privileges that allow the container to perform its function. 

  • Assess image provenance. Use images from known registries.

  • Scan deployments. Enforce policies based on the scans’ results. 

  • Use labels and annotations. Label or annotate deployments with the contact information of the team responsible for a containerized application to streamline triage.

  • Enable role-based access control (RBAC). RBAC controls user and service account authorization to access a cluster’s Kubernetes API server.
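
Several of these practices can be expressed directly as manifests. As an illustrative RBAC sketch (all names hypothetical): a namespaced Role granting read-only access to pods, bound to a single service account:

```yaml
# Grant read-only access to pods in one namespace...
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: payments
rules:
  - apiGroups: [""]          # "" is the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# ...and bind it to a single service account, nothing broader.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: payments
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```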

Proactive security approaches during build and deployment can reduce the likelihood of security incidents at runtime, but identifying and responding to runtime threats requires continually monitoring process activity and network communications.

  • Use contextual information. Use the build and deploy time information in Kubernetes to evaluate observed vs. expected activity during runtime in order to detect suspicious activity.

  • Scan running deployments. Monitor running deployments for newly discovered vulnerabilities, not just those found when their container images were first scanned.

  • Use built-in controls. Configure the security context for pods to limit their capabilities.

  • Monitor network traffic. Observe and compare live network traffic to what Kubernetes network policies allow to identify unexpected communication.

  • Use allow lists. Identify processes executed during the normal course of the app’s runtime to create an allow list.

  • Compare runtime activity in similarly deployed pods. Replicas with significant deviations require investigation.

  • Scale suspicious pods to zero. Use Kubernetes native controls to contain breaches by automatically instructing Kubernetes to scale suspicious pods to zero, or destroy and restart instances.
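
The “use built-in controls” step above can be sketched as a pod securityContext that drops privileges (image and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true        # refuse to start if the image runs as root
    runAsUser: 10001
  containers:
    - name: app
      image: registry.example.com/myapp:1.2.3
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]       # drop all Linux capabilities
```

To contain a suspicious workload, its controller can be scaled down with `kubectl scale deployment/<name> --replicas=0`.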

Security extends beyond images and workloads to the entire Kubernetes infrastructure: clusters, nodes, the container engine, and the underlying cloud.

  • Apply Kubernetes updates. Updating your Kubernetes distribution applies security patches and delivers new security features.

  • Configure the Kubernetes API server. Disable unauthenticated/anonymous access and use TLS encryption for connections between kubelets and the API server.

  • Secure etcd. etcd is the key-value store where Kubernetes keeps cluster data, including secrets; restrict access to it and require TLS for client connections.

  • Secure the kubelet. Disable anonymous access by starting the kubelet with the --anonymous-auth=false flag, and use the NodeRestriction admission controller to limit what the kubelet can access.
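
The kubelet flag above can also be expressed in a KubeletConfiguration file (a sketch; the values shown are common hardening settings):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false        # equivalent of --anonymous-auth=false
  webhook:
    enabled: true         # delegate authentication to the API server
authorization:
  mode: Webhook           # don't authorize every request unconditionally
readOnlyPort: 0           # disable the unauthenticated read-only port
```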

Cloud security

Regardless of what type of cloud (public cloud, private cloud, hybrid cloud, or multicloud) hosts the containers or runs Kubernetes, the cloud user—not the cloud provider—is always responsible for securing the Kubernetes workload, including:

  • Container images: Sources, contents, and vulnerabilities

  • Deployments: Network services, storage, and privileges

  • Configuration management: Roles, groups, role bindings, and service accounts

  • Application: Secrets management, labels, and annotations

  • Network segmentation: Network policies in the Kubernetes cluster

  • Runtime: Threat detection and incident response

Using containers and Kubernetes doesn’t change your fundamental security goal: reducing vulnerabilities and minimizing risk.

  • Embed security early into the container lifecycle. Security should allow developers and DevOps teams to confidently build and deploy applications that are production-ready.

  • Use Kubernetes-native security controls. Native controls keep security measures from conflicting with the orchestrator.

  • Let Kubernetes context prioritize remediation. Use what the orchestrator knows about each deployment to address the riskiest issues first.

Keep reading


How OpenShift enables container security

Red Hat® OpenShift® can apply security controls to the software supply chain, improving the security of applications without reducing developer productivity. 


What is container security?

Container security is the protection of the integrity of containers. This includes everything from the applications they hold to the infrastructure they rely on.


What are Kubernetes patterns?

A pattern describes a repeatable solution to a problem. Kubernetes patterns are design patterns for container-based applications and services.  
