In this post: 

  • What application context is in Kubernetes, including the metadata and attributes that live in Kubernetes artifacts.

  • How adding context to applications helps address security issues that arise during the application's life cycle.

  • Why image immutability is valuable for preventing and dealing with security issues in running applications.


This is the second of a two-part series discussing DevSecOps best practices for Kubernetes environments and applications. Part one explains how to embed DevSecOps practices into developer workflows, protect the software supply chain, and transform developers into security users. Part two explores how you can implement DevSecOps principles to improve Kubernetes security analysis and remediation across the full development life cycle.

DevOps context drives better, faster security analysis and decision-making

Organizations spend significant resources responding to and resolving security incidents. This only gets harder when containerized applications are dynamically orchestrated, scale to large numbers, and are ephemeral. As discussed in the previous post, the software supply chain acts as a centralized place to make any software changes that will propagate through the rest of the application life cycle and into production environments.

The software supply chain also serves as a chokepoint where users can incorporate additional application context that can be highly valuable when security issues arise later in the application life cycle, at runtime. A DevSecOps approach to Kubernetes security utilizes this context to speed investigations of security incidents and remediation of their underlying issues. 

Application context in Kubernetes can take many forms, including metadata and attributes added to Kubernetes artifacts such as manifests and images (for example, immutable tags or cryptographic signatures). Across a deployment, this context may include information about:

  • What processes will execute within containers.

  • Whether any resource limits exist.

  • The system-level privileges and capabilities granted to individual containers.

  • Whether the container’s root filesystem is read-only or not.

  • What block devices and secrets are present.

It may also include valuable metadata such as labels and annotations. Labels help establish what an application is, while annotations add descriptions about the application, as in the manifest sketch below.
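To make this concrete, here is a minimal sketch of a Deployment manifest carrying this kind of context. The application name, team label, annotation text, and image reference are hypothetical; the fields shown (labels, annotations, resource limits, and the container securityContext) are standard Kubernetes API fields.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api                  # hypothetical application name
  labels:
    app: payments-api                 # labels establish what the application is
    team: payments                    # hypothetical owning team, useful later for routing alerts
  annotations:
    description: "Customer-facing payments API"   # annotations describe the application
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
      - name: api
        image: registry.example.com/payments-api:1.4.2   # hypothetical immutable tag
        resources:
          limits:                     # resource limits are part of the deployment context
            cpu: "500m"
            memory: "256Mi"
        securityContext:
          readOnlyRootFilesystem: true       # root filesystem is read-only
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]                    # no extra system-level capabilities granted
```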

Evaluating this context across pod replicas can produce a quicker, clearer understanding of application baselines representing the expected behavior of running containers, such as network traffic patterns or execution of specific container processes. This leads to more accurate anomaly and threat detection, reduces alert fatigue, and eliminates manual workflows.

By taking a DevSecOps approach that incorporates and analyzes application context introduced in the software supply chain alongside security and policy violations at runtime, security and engineering teams can more quickly determine risk levels and remediation prioritization.

Bringing the best of DevOps and Security together with full life cycle policies

DevSecOps represents practices that integrate security end-to-end, from the time applications are built through the time they are running in production. This requires security policy frameworks that can incorporate criteria across the entire application life cycle.

Teams need to consider the feasibility, ease of operation, and overall impact on DevOps processes and workflows as part of complying with and enforcing these policies on an ongoing basis. Ideally, a DevSecOps model empowers individuals to operate autonomously in applying these policies across an organization’s environment that has many applications deployed and running in multiple production clusters. 

For containerized applications, policies should be enforced on images to fail builds based on criteria such as the presence of known vulnerabilities or unnecessary tooling and packages embedded within the images. Policies also need to restrict the deployment of pods whose level of access or permissions exceeds what is necessary; examples include pods that run privileged containers or mount filesystems with both read and write access.
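As a hedged illustration of what a deployment-time policy would catch, the pod spec below shows the kind of over-permissive configuration such a gate would typically reject. The names and image are hypothetical; the flagged fields (a privileged container and a writable host mount) are standard pod spec fields.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug-tools                   # hypothetical pod name
spec:
  containers:
  - name: shell
    image: registry.example.com/debug-shell:latest   # mutable tag, hard to trace back to a build
    securityContext:
      privileged: true                # privileged container: a common deploy-gate violation
    volumeMounts:
    - name: host-root
      mountPath: /host
      readOnly: false                 # read-write mount of the host filesystem
  volumes:
  - name: host-root
    hostPath:
      path: /
```

Rejecting a spec like this at deploy time forces the fix to be made upstream, in the manifest or the image, rather than after the workload is already running.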

Finally, running applications should be subject to policies that flag runtime activity deviating from what is expected, such as execution of a known malicious process or an unusual network connection outside of typical patterns.

As application owners, DevOps teams should set policies that incorporate security criteria across the phases of the application life cycle (build, deploy, run) and that match how these applications are intended to operate. This approach encapsulates important aspects of security within end-to-end policies that all stakeholders can monitor, rather than in policies implemented in isolation by engineering and security teams.

Cattle versus pets: how immutability enables policy enforcement when things go wrong

A core tenet of cloud-native software is that infrastructure and applications should be treated as immutable. Once running, they are not updated or patched. Rather, any changes are made at the source, whether that is an image, a configuration file, or something else, and the affected components are torn down and redeployed.

In parallel, DevOps principles emphasize fast iteration, frequent updates, and high release velocity. Taken together, these can enable a DevSecOps operational model for scalable, orchestrated security enforcement. 

In Kubernetes environments, a DevSecOps approach to policy enforcement is best realized by leveraging the orchestration system — Kubernetes itself — to carry out enforcement actions such as killing pods, preventing containers from being launched, or restricting system-level activities that an application is allowed to perform. This approach minimizes operational risk to running applications, improves scalability, and eliminates the need for DevOps teams to run and maintain additional tooling.

Enforcement also should not be viewed as a runtime-only concern. In Kubernetes environments, organizations can implement multiple points of enforcement across the application life cycle:

  • Throughout CI/CD pipelines.

  • At deployment time, using Kubernetes admission controllers such as Open Policy Agent Gatekeeper (sketched below).

  • At runtime.
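As one sketch of the deployment-time gate, the constraint below blocks privileged containers at admission, assuming the K8sPSPPrivilegedContainer constraint template from the Gatekeeper policy library has already been installed in the cluster.

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer       # constraint template assumed to be installed from the Gatekeeper library
metadata:
  name: disallow-privileged-containers
spec:
  enforcementAction: deny             # reject violating pods at admission time (dryrun/warn are also options)
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
```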

These enforcement points allow users to “gate” certain activities based on potential security issues and make necessary changes as early as possible. This DevSecOps-friendly approach to policy enforcement was previously not possible when infrastructure platforms lacked the security controls that exist natively in Kubernetes, such as:

  • Network Policies for network segmentation (see the sketch after this list).

  • Admission Controllers for intercepting and possibly rejecting requests to the Kubernetes API server.

  • Secrets for storing sensitive credentials.

  • Role-based access control for granting authorization to users and service accounts.

  • Security contexts, Open Policy Agent, and support for kernel security features such as seccomp for setting system-level constraints on individual containers.
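A brief sketch of two of these controls is shown below. The namespace, pod, and image names are hypothetical; the resources themselves, a default-deny NetworkPolicy and a seccomp profile applied through the pod security context, use standard Kubernetes fields.

```yaml
# Deny all ingress traffic to pods in the namespace unless another policy allows it
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments                 # hypothetical namespace
spec:
  podSelector: {}                     # selects every pod in the namespace
  policyTypes:
  - Ingress                           # no ingress rules are listed, so all ingress is denied
---
# Constrain system calls using the container runtime's default seccomp profile
apiVersion: v1
kind: Pod
metadata:
  name: payments-worker               # hypothetical pod name
  namespace: payments
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault            # apply the runtime's default seccomp profile to all containers in the pod
  containers:
  - name: worker
    image: registry.example.com/payments-worker:2.0.1   # hypothetical image
```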

By utilizing this rich set of controls, security and engineering teams can apply DevSecOps practices to achieve a faster, more iterative, fine-grained framework for enforcing security policies in Kubernetes environments.

Closing the loop: streamlining remediation across the full life cycle

When engineering and security teams adopt a mindset that infrastructure and applications are immutable, traditional approaches to incident response and remediation are often rendered obsolete. 

Security operators may no longer be the primary people responsible for remediation. Instead, since changes to address root causes must be made upstream in image builds, DevOps teams must increasingly focus on remediating vulnerabilities, misconfigurations, and other sources of security incidents. Therefore, applying DevSecOps to cloud-native environments requires that DevOps users tackle significant aspects of response and remediation. 

However, such a DevSecOps-driven approach to remediation requires that new workflows and practices be established for DevOps users to accomplish remediation goals effectively. These users must be given a clear prioritization framework for security issues that arise, and the desired changes must be clearly communicated.

Security and engineering stakeholders need to collaborate to outline prioritization criteria that correspond to risk levels associated with each security issue. These can be applied to filter issues in priority order to guide DevOps users towards those issues requiring immediate attention. 

For example, a vulnerability that requires certain privileges to exploit may or may not be ranked as a high priority depending on whether those privileges exist for containers in a given application. Or, a vulnerability that is exploited by writing to a container’s filesystem may or may not be considered critical if the filesystem is configured as read-only. 

One fundamental workflow that can promote efficient collaboration regarding remediation is to configure alerts and notifications on security issues to be delivered to specific DevOps teams based on the impacted application. 

As an example, container images are typically made up of multiple layers, with contributions from multiple individuals or teams, and are often built from a base operating system image owned by a particular team. In this scenario, alerts on vulnerabilities or issues with components in particular layers can be routed directly to the team that owns the base image, a specific image layer, or certain open source components.

Another example concerns compliance. If checks against industry standards such as PCI DSS or HIPAA fail, DevOps teams must be given clear remediation guidance to resolve outstanding issues and ensure their applications conform to compliance requirements.

Conclusion

The changes introduced by cloud-native technologies invite organizations to evolve their security toward a DevSecOps model. This means security and engineering teams must work together to develop strategies to implement:

  • “Shift left” practices that incorporate security earlier in the software development life cycle.

  • Workflows that implement “security as code.”

The goal of an effective cloud-native security strategy is to allow teams to achieve greater software delivery velocity while building more secure systems. DevOps and software engineers stand to greatly improve security functions, in collaboration with security teams that specify policies for tooling, processes, and metrics.

Learn more about how Kubernetes can support a DevSecOps culture in the DevSecOps in Kubernetes whitepaper.


About the author

Wei Lien Dang is Senior Director of Product and Marketing for Red Hat Advanced Cluster Security for Kubernetes. He was a co-founder at StackRox, which was acquired by Red Hat. Before his time at StackRox, Dang was Head of Product at CoreOS and held senior product management roles for security and cloud infrastructure at Amazon Web Services, Splunk, and Bracket Computing. He was also part of the investment team at the venture capital firm Andreessen Horowitz.
