Kubernetes, also known as K8s or “Kube,” is an open source container orchestration platform that automates the deployment, management, and scaling of containerized applications. Kubernetes organizes Linux containers into clusters and uses application programming interfaces (APIs) to connect containerized microservices. Since any layer or service involved in a Kubernetes deployment can present vulnerabilities, the process of securing Kubernetes clusters can be complex.
While some teams take a container-centric approach to Kubernetes security, focusing mainly on securing container images and the container runtime, others opt for Kubernetes-native security. This broader approach pulls in context from Kubernetes and uses built-in Kubernetes controls to implement risk-based security best practices across the full application development life cycle. Kubernetes-native security also addresses risks and vulnerabilities that are specific to Kubernetes, such as misconfigured Kubernetes RBAC policies, insecure Kubernetes control plane components, and misused Kubernetes secrets.
Kubernetes, as a relatively new technology, has seen tremendous adoption in recent years, but security investment hasn't always kept pace. Combined with a lack of security awareness and the ever-present skills gap, this means security incidents can have devastating consequences. Security issues delay or slow down application development and deployment. When they culminate in an incident, Kubernetes and container security issues also contribute to revenue or customer loss, employee termination, and other impacts on business operations.
Kubernetes and containerization facilitate quicker and more scalable DevOps, but they also come with additional security risks. As more containers are deployed, your attack surface expands, and pinpointing which containers have vulnerabilities or misconfigurations becomes more difficult.
Common risks & challenges
Kubernetes pod-to-pod networking
A major benefit of Kubernetes is its wide range of network configuration options, which control how pods within a cluster communicate. However, Kubernetes does not restrict network communication between pods within a cluster by default, so every pod can communicate with every other pod until a relevant network policy is assigned. This means a single pod that's been breached by a bad actor can be used as the vector to attack every other pod in that cluster. Kubernetes network policies are written as YAML manifests, which can be tedious to author and verify at scale; this is just one of many reasons operationalizing Kubernetes network policies can be challenging, and it can lead teams to simply forgo network segmentation in favor of speed.
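As a minimal sketch of what such a policy looks like, the manifest below (using a hypothetical "demo" namespace) denies all pod-to-pod traffic in that namespace, forcing teams to allow specific flows explicitly:

```yaml
# Illustrative default-deny policy: the empty podSelector matches every
# pod in the namespace, and because no ingress or egress rules are
# listed, all traffic in both directions is blocked.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: demo     # hypothetical namespace for this example
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

A default-deny baseline like this is a common starting point; individual allow policies for known traffic paths are then layered on top. Note that a network plugin that enforces NetworkPolicy must be installed for the policy to take effect.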
Misconfigurations
Misconfigurations, which are often caused by human error and an absence of automated security scans, present a serious risk to Kubernetes environments and can lead to breaches. Due to the dynamic nature of containers, identifying misconfigurations and maintaining a consistent security posture can be challenging. Kubernetes was developed to prioritize speed and operability, so default configurations are usually open and unrestricted, which leaves organizations susceptible to attacks.
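As a hedged sketch of overriding those permissive defaults, the pod spec below (image name and pod name are hypothetical) sets an explicit security context instead of relying on what Kubernetes assumes when these fields are left unset:

```yaml
# Illustrative hardening of a single container via securityContext;
# by default none of these restrictions are applied.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                  # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      securityContext:
        runAsNonRoot: true                  # refuse to start if the image runs as root
        allowPrivilegeEscalation: false     # block setuid-style privilege gains
        readOnlyRootFilesystem: true        # container filesystem is immutable
        capabilities:
          drop: ["ALL"]                     # drop all Linux capabilities
```

Automated scanning can catch pods that omit settings like these before they reach a cluster, which is exactly the class of misconfiguration described above.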
Software supply chain issues
Security issues in the software supply chain, including vulnerable application components, insufficient access controls, the lack of a software bill of materials (SBOM), CI/CD pipeline weaknesses, and inconsistent policy enforcement, are also a major concern for organizations. The sprawling software supply chains emblematic of cloud-native Kubernetes environments require a unique set of controls. Software supply chain security must start in the integrated development environment (IDE) and extend all the way to the runtime environment. It needs to account for all the content (source code, images, artifacts), tooling (developer and security), and people involved in the supply chain. Source code analysis, access control, attestation, and SBOMs are just a few of the many considerations for software supply chain security.
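As one hedged illustration of wiring an SBOM step into a pipeline, the CI job sketch below (GitHub Actions syntax; image name is hypothetical, and Syft is just one of several SBOM generators whose flags may vary by version):

```yaml
# Illustrative CI job: build an image, generate an SPDX SBOM for it,
# and keep the SBOM as a build artifact for later attestation/auditing.
name: supply-chain
on: [push]
jobs:
  sbom:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t registry.example.com/app:${{ github.sha }} .
      - name: Generate SBOM (Syft shown as an example tool)
        run: syft registry.example.com/app:${{ github.sha }} -o spdx-json > sbom.spdx.json
      - name: Upload SBOM as a build artifact
        uses: actions/upload-artifact@v4
        with:
          name: sbom
          path: sbom.spdx.json
```

Capturing the SBOM at build time, rather than reconstructing it later, keeps the inventory tied to the exact artifact that ships.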
Shifting security left
Closely related to software supply chain security is the challenge of shifting security left, that is, moving Kubernetes security efforts to the earlier stages of the container life cycle. This is a challenge because shift-left security requires developers to become security users, empowered with the knowledge and tooling to make security decisions within their workflows. However, the business benefits of shifting security left are tremendous, and it is the primary way that Kubernetes and container security should be implemented: the more security issues are addressed at the build stage, the fewer runtime issues are likely to arise, leading to fewer project delays.
Runtime detection and response
The volume of runtime threat vectors in containerized applications running in a Kubernetes environment poses a challenge for teams tasked with detecting and responding to such issues. There are many ways for a bad actor to gain initial access to a Kubernetes environment, execute malicious code, escalate privileges, achieve persistence, evade detection, and move laterally, resulting in data deletion or exfiltration, denial of service, or resource hijacking. You can read about this topic in further detail in our blog on the MITRE ATT&CK® framework for Kubernetes.
Kubernetes infrastructure security
The many layers of Kubernetes, from control plane components such as the API server, kube-scheduler, kube-controller-manager, and etcd, to the worker node components that run the containerized workloads, pose their own security challenges. Each of these services must be securely configured in order to provide a hardened cluster environment for applications to run on. On top of that, whether you run Kubernetes as a self-managed service or use a fully managed cloud service changes how you must secure its various components. In self-managed environments, for example, all of the control plane components are often your responsibility, in addition to the node components. When using a managed Kubernetes service, security responsibility is shared between the service provider and you, the customer. This adds yet another challenge.
Containerization and Kubernetes have several built-in security advantages that can help teams address the risks associated with container security issues. For example:
- Containers with security issues discovered at runtime are rebuilt at the build stage and redeployed, rather than updated or patched while running. Known as immutability, this practice allows for better predictability in container behavior and easier detection of anomalous behavior.
- Network policies can segment pods or groups of pods while admission controllers can apply policies for better governance.
- Role-based access control (RBAC) can assign specific permissions to users and service accounts.
- Kubernetes secrets can better safeguard sensitive data like encryption keys.
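To make the last two controls above concrete, here is a hedged sketch of a namespaced read-only Role bound to a hypothetical "ci-reader" service account, alongside a Secret (namespace, names, and values are all illustrative):

```yaml
# Illustrative RBAC: grant read-only access to pods in one namespace,
# rather than cluster-wide permissions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo          # hypothetical namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
  - kind: ServiceAccount
    name: ci-reader        # hypothetical service account
    namespace: demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
# Illustrative Secret: note that values are only base64-encoded, not
# encrypted, so RBAC on secrets and etcd encryption at rest still matter.
apiVersion: v1
kind: Secret
metadata:
  name: api-key
  namespace: demo
type: Opaque
stringData:
  key: "replace-me"        # placeholder value
```

Scoping the Role to a single namespace and a short verb list follows the least-privilege principle these controls are meant to enforce.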
However, Kubernetes is not a security platform, so teams must operationalize risk assessment and target vulnerabilities at each layer of the Kubernetes environment and at every stage throughout the container and application life cycles. To handle Kubernetes security effectively, you must take advantage of Kubernetes-native security controls where available, while implementing best practices during the build, deploy, and runtime phases.
As a leader in open source container technology, Red Hat can help you grow your knowledge of Kubernetes security best practices and make your implementation of containers more secure. To help teams identify and address K8s security concerns more efficiently, Red Hat offers Kubernetes-native solutions that embed security into the container lifecycle and enable DevOps teams to build and deploy production-ready applications.
KubeLinter, created by StackRox, which Red Hat acquired in 2021, is an open source static analysis tool that identifies misconfigurations and programming errors in Kubernetes deployments. KubeLinter runs a series of checks to analyze Kubernetes configurations, identify errors, and generate warnings for anything that doesn't align with security best practices.
Red Hat Service Interconnect is equipped with built-in security that scales across clusters and clouds by default while providing trusted communication links between services. Service Interconnect also allows for flexibility of development across legacy, container, or Kubernetes platforms, giving your developers more options for building, modernizing, and deploying your next-generation business applications.
Red Hat® Advanced Cluster Security for Kubernetes (ACS) enables organizations to securely build, deploy, and run cloud-native applications. Offered as either a self-managed or fully managed SaaS solution, ACS protects containerized workloads in all major cloud and hybrid environments and enables DevOps and InfoSec teams to operationalize security, lower operational costs, and increase developer productivity.