Securing pods, and the containers that run as part of them, is a critical aspect of protecting your Kubernetes environments. Among other reasons, pods and containers are the individual units of compute that are ultimately subject to the adversarial techniques that may be used in any attack on your Kubernetes clusters. Since pods are also the smallest resource you can deploy and manage in Kubernetes, applying security at this level gives you fine-grained controls scoped to individual application components.
Fortunately, Kubernetes provides key native capabilities that enable users to harden and secure pods. These include the Kubernetes security context and security policies such as Pod Security Policies. Additional tools such as Open Policy Agent (OPA) Gatekeeper, which we have previously written about, can also be used to enforce security policies. This post explores these capabilities and how you can use them to better secure pods in your Kubernetes clusters.
Kubernetes Security Context
The starting point for understanding how pod security works in Kubernetes is what is known as a “security context,” which defines the access and permission constraints applied to an individual pod at runtime. These settings encompass a range of configurations, such as whether a container may run privileged, whether its root filesystem should be mounted read-only, access control based on UID and GID, Linux capabilities, and whether built-in Linux security mechanisms such as seccomp, SELinux, and AppArmor should be leveraged. It is also worth noting that Kubernetes has recently GA’d support for features such as seccomp.
Pod-level Security Context
The goals of these constraints are twofold: to limit any given pod’s susceptibility to compromise via attacker techniques such as those described in the Kubernetes attack matrix, and to limit the blast radius of any potential attack beyond a given set of containers.
To specify these settings for a given pod, the securityContext field must be included in the pod manifest; this references a PodSecurityContext object that stores the relevant security attributes via the Kubernetes API. A pod-level security context also results in settings being applied to volumes when they are mounted, where applicable, namely to match the fsGroup specified within the security context.
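As an illustrative sketch (the pod name, image, and field values below are arbitrary examples, not recommendations), a pod-level security context might look like the following, with runAsUser, runAsGroup, and fsGroup applied to every container in the pod and to supported volumes when they are mounted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo        # hypothetical pod name
spec:
  securityContext:                   # pod-level settings, applied to all containers
    runAsUser: 1000                  # run container processes as UID 1000
    runAsGroup: 3000                 # primary GID for container processes
    fsGroup: 2000                    # supported volumes are made group-writable by GID 2000
  volumes:
    - name: data
      emptyDir: {}
  containers:
    - name: app                      # hypothetical container name and image
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
```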
Container-level Security Context
Pod-level security contexts result in constraints being applied to all containers that run within the relevant pod. But you may not always want the same settings to apply to every container in a pod, so Kubernetes also allows you to specify security contexts for individual containers. To do this, the securityContext field must be included in the container manifest. Field values of container.securityContext take precedence over field values of PodSecurityContext, meaning the constraints for an individual container override those specified for the pod when there is overlap or conflict. Note, however, that container security contexts do not override a pod’s security context as it applies to the pod’s volumes.
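As a sketch of how this precedence works (names and values are again illustrative), the container below overrides the pod-level runAsUser and adds container-only settings such as a read-only root filesystem and dropped capabilities:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: container-context-demo        # hypothetical pod name
spec:
  securityContext:
    runAsUser: 1000                   # pod-level default UID
  containers:
    - name: app                       # hypothetical container name and image
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      securityContext:
        runAsUser: 2000               # overrides the pod-level value for this container
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true  # mount the container's root filesystem read-only
        capabilities:
          drop: ["ALL"]               # drop all Linux capabilities
```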
Kubernetes Security Policies
Now that we have covered the concept of security context in Kubernetes, which provides the ability to declare security parameters for pods and containers that are applied at runtime, we will explore complementary features in Kubernetes that further enforce these settings. Security policies in Kubernetes are the main control plane mechanisms that can be used to centrally apply certain policies across pods throughout a cluster.
Pod Security Policies
The primary feature natively available in Kubernetes for enforcing these types of security policies is the Pod Security Policy (PSP). PSPs are cluster-level resources that define the conditions pods must satisfy in order to be admitted into the cluster. PSPs are enforced by an optional Kubernetes admission controller; once it is enabled, any attempt to create a pod that does not satisfy a relevant, available, and authorized PSP will be denied by the PSP admission controller. Similar to pod-level security context, PSPs are only applicable at the level of pods and to a subset of the fields that can be configured in the pod manifest. To enforce policies on other fields in the pod specification, users have the option of writing their own validating admission controllers to supplement native PSP capabilities.
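As a rough sketch (the policy name and the specific fields you constrain are illustrative and will depend on your requirements), a restrictive PSP might disallow privileged containers and require pods to run as non-root:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example            # hypothetical policy name
spec:
  privileged: false                   # disallow privileged containers
  allowPrivilegeEscalation: false
  requiredDropCapabilities: ["ALL"]
  runAsUser:
    rule: MustRunAsNonRoot            # reject pods that run as UID 0
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges: [{min: 1, max: 65535}]
  fsGroup:
    rule: MustRunAs
    ranges: [{min: 1, max: 65535}]
  readOnlyRootFilesystem: true
  volumes: ["configMap", "emptyDir", "secret", "persistentVolumeClaim"]
```

Keep in mind that a PSP only takes effect for pods whose creating user or service account is authorized (via RBAC) to use it.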
Policy Enforcement with OPA Gatekeeper
For some time, the Kubernetes community at large has also been considering alternative ways to achieve the goals enabled by Pod Security Policies. One advantage of PSPs is that they leverage a built-in admission controller. However, they also have drawbacks. As their name indicates, PSPs only apply to pods, making their coverage limited, and they also introduce complexity and overhead that must be managed for each deployment. PSPs are not nearly as flexible as other options such as OPA Gatekeeper, which provides a Kubernetes admission controller on top of the OPA policy engine to flexibly enforce policies on pods as well as other resource types. Some Kubernetes platforms, such as Azure Kubernetes Service (AKS), have opted to deprecate support for Pod Security Policies and instead implement policy enforcement using OPA Gatekeeper.
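As one hedged example, assuming the privileged-container ConstraintTemplate (K8sPSPPrivilegedContainer) from the open source Gatekeeper policy library has already been installed in the cluster, a constraint that rejects privileged containers might look like this:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer       # kind defined by the installed ConstraintTemplate
metadata:
  name: disallow-privileged           # hypothetical constraint name
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]                # apply the policy to Pod resources
```

Because Gatekeeper constraints can match any resource kind, the same pattern extends beyond pods to Deployments, Services, and other objects.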
Policy Definitions
In Kubernetes, security policies are intended to adhere to a framework that defines policy types, separate from how enforcement is implemented. The three types of policy definitions are Privileged, Baseline/Default, and Restricted. Privileged policies are intentionally unrestricted, allowing for privilege escalations with the assumption that the pods they are applied to are managed by trusted users. Baseline/Default policies seek to balance security concerns with operational ease of use by applying minimally restrictive constraints while disallowing known privilege escalations; for example, privileged pods and sharing host namespaces would not be allowed under this type of policy. Finally, Restricted policies follow security best practices for pods, with less prioritization of operational convenience or compatibility.
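To make the distinction concrete, the illustrative pod below shares host namespaces and runs a privileged container, so it would be allowed under a Privileged policy but rejected under Baseline/Default and Restricted policies:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: baseline-violations-demo      # hypothetical example, not a recommendation
spec:
  hostNetwork: true                   # sharing the host network namespace violates Baseline
  hostPID: true                       # sharing the host PID namespace violates Baseline
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      securityContext:
        privileged: true              # privileged containers violate Baseline
```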
This framework can help provide some guidelines for you and your organization to implement security policies for pod-level configurations based on specific operational practices, needs, and other considerations.
How StackRox Helps
Hardening pods using native controls can, at times, get quite complex, especially when running clusters at scale. StackRox provides automated policy enforcement across dozens of different policies, a number of which you can easily take advantage of to secure pods and ensure configuration best practices such as only allowing non-root users and configuring read-only root file systems.
Conclusion
A critical cornerstone of any Kubernetes security strategy is to secure the pods and containers that make up your clusters. The good news is that Kubernetes itself as well as its ecosystem make available multiple types of flexible capabilities and tools that enable you to protect pods in ways that range from applying general security best practices to meeting specific, fine-grained requirements based on workload type or other needs. Kubernetes security context and security policies, including Pod Security Policies, are the best way to get started and immediately increase the security of your Kubernetes applications.
About the author
Wei Lien Dang is Senior Director of Product and Marketing for Red Hat Advanced Cluster Security for Kubernetes. He was a co-founder at StackRox, which was acquired by Red Hat. Before his time at StackRox, Dang was Head of Product at CoreOS and held senior product management roles for security and cloud infrastructure at Amazon Web Services, Splunk, and Bracket Computing. He was also part of the investment team at the venture capital firm Andreessen Horowitz.
Dang holds an MBA with high distinction from Harvard Business School and a BS in Applied Physics with honors from Caltech.