What is container security?


Container security is the process of safeguarding containerized applications from malware and other vulnerabilities. It involves defining and adhering to build, deployment, and runtime practices that protect your Linux containers, from the applications they support to the infrastructure they rely on.

As organizations adopt microservice design patterns and container technologies—such as Docker and Kubernetes—security teams are challenged to develop container security solutions that facilitate these infrastructure shifts. Container security needs to be integrated and continuous and support an enterprise’s overall security posture. 

The container orchestrator (namely Kubernetes) plays a critical role in container security, and offers access to rich contextual data for better visibility and compliance, context-based risk profiling, networking, and runtime detection. Effective container security builds on Kubernetes constructs, such as deployments, pods, and network policies. Kubernetes network policies, for example, are a built-in security feature that can be used to control pod-to-pod communication and minimize an attacker's blast radius.

In general, continuous container security for the enterprise is about:

  • Securing the container pipeline and the application
  • Securing the container deployment environment(s) and infrastructure
  • Securing the containerized workloads at runtime

Find out how companies are implementing container security initiatives.

Get the State of Kubernetes Security Report

In traditional software development, a security review can be a final series of tests at the end of development. But with modern cloud-native development workflows, the attack surface is much greater, and security becomes a more complex problem. In cloud-native environments, where containers are the standard application delivery format, code is updated frequently and ingested from multiple repositories. Human error, such as misconfigurations, can open the door to unauthorized access at many points in the development and deployment cycle. Security vulnerabilities can emerge from practically anywhere. For this reason, security must be a continuous process.

Just as container deployment is handled with automation (using container orchestration tools like Kubernetes), security has to be automated as well. Using DevSecOps principles (a concept created to add a security emphasis to DevOps), code can be vetted and checked continuously throughout the development cycle. Vulnerabilities can be discovered and remediated early and quickly, rather than being overlooked until they emerge as time-consuming surprises. Because containers are immutable, container security means patching code at the build stage, not while running, so vulnerabilities don’t reemerge when containers are destroyed and rebuilt.

Scanning container images for malware and other security vulnerabilities is a critical step—and should be one of several layers of security. Organizations need to give consideration to the security of the entire software supply chain—in other words, all of the steps in the development and deployment of containerized software, including dependencies and runtime environments. 

Here are a few specific strategies for containerized development that take supply chain security into account:

  • Trusted content and an enterprise-grade content repository deliver pre-hardened images with advanced security and access controls.
  • A Zero Trust approach assigns the lowest access levels possible to critical resources.
  • Policy as Code embeds security controls directly in the CI/CD pipeline.
  • Signing and verification enforce attestation and establish trust by verifying that container images haven’t been tampered with.
  • GitOps practices help manage application and container security configurations.
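
To make the Policy as Code idea concrete, here is a minimal sketch of a pipeline gate that rejects images from untrusted registries or with mutable tags. In practice this logic usually lives in a dedicated tool (such as Open Policy Agent); the registry names and function below are illustrative, not a real API.

```python
# Illustrative policy-as-code gate for container image references.
# Registry names are hypothetical examples.
ALLOWED_REGISTRIES = {"registry.example.com", "registry.access.redhat.com"}

def check_image_policy(image_ref: str) -> list[str]:
    """Return a list of policy violations for a container image reference."""
    violations = []
    registry = image_ref.split("/")[0]
    if registry not in ALLOWED_REGISTRIES:
        violations.append(f"untrusted registry: {registry}")
    # A tag is only present if the last path segment contains a colon;
    # otherwise container tooling assumes the mutable "latest" tag.
    last_segment = image_ref.split("/")[-1]
    tag = image_ref.rsplit(":", 1)[-1] if ":" in last_segment else "latest"
    if tag == "latest":
        violations.append("mutable 'latest' tag; pin a version or digest")
    return violations

print(check_image_policy("registry.example.com/team/app:1.4.2"))  # []
print(check_image_policy("docker.io/library/nginx"))              # two violations
```

Running a check like this on every commit, rather than at release time, is what turns a written policy into an enforced one.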
Learn more about Red Hat® Trusted Software Supply Chain


Gather images

Containers are created out of layers of files called container images. 

A tool like Buildah lets you build OCI- and Docker-compatible images from scratch, with or without an existing container image starting point.
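
As a sketch, a minimal Buildah build from scratch looks like the following. The application binary, user ID, and image name are placeholders:

```shell
# Build a minimal image from an empty base, with no Dockerfile required.
ctr=$(buildah from scratch)                        # start from an empty image
buildah copy "$ctr" ./myapp /myapp                 # copy in a static binary
buildah config --entrypoint '["/myapp"]' --user 1001 "$ctr"  # run as non-root
buildah commit "$ctr" registry.example.com/team/myapp:1.0
```

Starting from scratch keeps the attack surface small: there is no shell, package manager, or extra library for an attacker to exploit.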

Container images are the standard application delivery format in cloud-native environments, but even cloud-native companies mix workloads between cloud providers. The ideal container security solution should support all architectures—whether your infrastructure runs on private hardware, a shared data center, or a public cloud like Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform.

The base image, or golden image, is one of the most important for security purposes, because it is used as the starting point from which you create derivative images. Container security starts with finding trusted sources for base images. Confirm that the image comes from a known company or open source group, is hosted on a reputable registry, and that the source code for all components in the image is available.
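
One way to make that trust verifiable is to pin the base image by digest rather than by tag, so a rebuild can never silently pull different content. A hedged Containerfile sketch, with placeholder names and digest:

```dockerfile
# Containerfile: pin a trusted base image by digest and run as non-root.
# The digest below is a placeholder, not a real value.
FROM registry.access.redhat.com/ubi9/ubi-minimal@sha256:<digest>
COPY --chown=1001:0 app /opt/app
USER 1001
ENTRYPOINT ["/opt/app/run"]
```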

Even when using trusted images, though, adding applications and making configuration changes will introduce new variables. When bringing in external content to build your apps, keep proactive vulnerability management in mind:

  • Use an image scanner, either built into the registry or separate, to scan all images on a regular cadence. Look for a scanner that scans based on specific languages, packages, and image layers.
  • Identify modified container images that break policies or documented best practices—known as container misconfigurations—to reduce the likelihood and impact of potential compromises.
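
A common way to act on scanner output is to gate the pipeline on it. The sketch below assumes a simplified, hypothetical report format; adapt the field names to whatever your scanner actually emits.

```python
# Gate a pipeline stage on image-scan results.
# The report shape here is a simplified assumption, not a real scanner format.
def gate_on_scan(report: dict, fail_on: tuple = ("Critical", "Important")) -> bool:
    """Return True if the image passes, False if it has blocking findings."""
    blocking = [
        v for v in report.get("vulnerabilities", [])
        if v.get("severity") in fail_on
    ]
    for v in blocking:
        print(f"BLOCK: {v['id']} ({v['severity']}) in {v['package']}")
    return not blocking

report = {
    "image": "registry.example.com/team/app:1.4.2",
    "vulnerabilities": [
        {"id": "CVE-2024-0001", "severity": "Critical", "package": "openssl"},
        {"id": "CVE-2024-0002", "severity": "Low", "package": "zlib"},
    ],
}
print(gate_on_scan(report))  # False: the Critical finding blocks the deploy
```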

Read a blog post about container image security

Anticipate and remediate vulnerabilities

Containers are popular because they make it easy to build, package, and promote an application or service, and all its dependencies, throughout its entire lifecycle and across different workflows and deployment targets. But there are still some challenges to container security. Containers can help you implement finer-grained workload-level security, but they also introduce new infrastructure components and unfamiliar attack surfaces. The right container security solution must help secure the cluster infrastructure and orchestrator as well as the containerized applications they run.

Static security policies and checklists don’t scale for containers in the enterprise:

  • The supply chain needs more security policy services.
  • Security teams need to balance the networking and governance needs of a containerized environment.
  • Tools used during the build, maintenance, and service stages need to have different permission policies.

An effective container security program seeks to remediate vulnerabilities in real time and reduce the attack surface before images are deployed, while retaining provenance details. By building security into the container pipeline and defending your infrastructure, you can make sure your containers are reliable, scalable, and trusted.

When gathering container images, ask:

  • Are the container images signed and from trusted sources?
  • Where did the image come from, and how can I rebuild it?
  • When was the last scan date for a given image?
  • Are the runtime and operating system layers up to date?
  • How quickly and how often will the container be updated?
  • Are security risks identified, and how will they be tracked?
Learn specific Kubernetes patterns for container deployment and orchestration

Once you’ve obtained your images, the next step is to manage both access to, and promotion of, all container images your team uses. That means protecting the images you download as well as the ones you build. Using a private registry will allow you to control access through role-based assignments while also helping you manage content by assigning relevant metadata to the container. This metadata will help you identify and track known vulnerabilities. A private container registry also gives you the power to automate and assign policies for the images you have stored, minimizing human errors that may introduce vulnerabilities into your container environment. Container registries with enterprise-grade security capabilities will also have built-in vulnerability scanners.

When deciding how to manage access, ask:

  • What role-based access controls can you use to manage container images?
  • Are there tagging abilities, to help sort images? Can you tag images as approved only for development, and then testing, and then production environments?
  • Does the registry offer visible metadata that allows you to track known vulnerabilities?
  • Can you use the registry to assign and automate policy (e.g. checking signatures, application code scans, etc.)?
E-book: Boost your hybrid cloud security

The last step of the pipeline is deployment. Once you’ve completed your builds, you need to manage them according to industry standards, such as those established by the Center for Internet Security (CIS) and the National Institute of Standards and Technology (NIST). The trick here is to understand how to automate policies to flag builds with security issues, especially as new vulnerabilities are found. While vulnerability scanning remains important, it is only part of a larger set of security initiatives used to protect your container environments.

Because patching containers is never as good a solution as rebuilding them, integrated security testing should take into account policies that trigger automated rebuilds. The first part of this step is running component analysis tools that can track and flag issues. The second part is establishing tooling for automated, policy-based deployment.

When integrating security testing and automated deployment, ask:

  • Do any of my containers contain known vulnerabilities that I should fix before they're deployed into a production environment?
  • Are my deployments configured correctly? Are there overly privileged containers that don’t need the heightened privilege? Am I using a read-only root file system?
  • What’s my compliance posture with CIS Benchmarks and NIST SP 800-190?
  • Am I isolating any workloads deemed sensitive using built-in features such as network policies and namespaces?
  • Am I using built-in security and hardening features such as SELinux, AppArmor, and seccomp profiles?
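
Several of the checklist items above map directly onto a pod's `securityContext`. A sketch of a hardened pod spec, with illustrative names, might look like:

```yaml
# Pod spec fragment: non-root user, read-only root filesystem, minimal
# capabilities, and the runtime's default seccomp profile.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/team/app:1.4.2
    securityContext:
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
```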

Container security continues after testing and deployment, and extends to when the containerized applications are running. Aspects like threat detection, network security, and incident response become more relevant.

At runtime, applications can face unpredictable real-life threats where vulnerabilities and misconfigurations missed during the build time can be exploited. Runtime security should include looking for applications behaving in unexpected ways. Anomaly detection at runtime can identify privilege escalations, cryptomining, unexpected network flows, container escape, and other insecure behaviors.
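
One simple form of runtime anomaly detection is comparing observed processes against a baseline recorded during normal operation. Real tools build such baselines automatically; the event shape and baseline below are hypothetical.

```python
# Minimal runtime anomaly rule: flag any process a container was never
# observed running during baselining. Paths and names are illustrative.
BASELINE = {"app": {"/opt/app/run", "/usr/bin/curl"}}

def detect_anomalies(events: list[dict]) -> list[str]:
    """Return alerts for processes not present in a container's baseline."""
    alerts = []
    for e in events:
        expected = BASELINE.get(e["container"], set())
        if e["exe"] not in expected:
            alerts.append(f"{e['container']}: unexpected process {e['exe']}")
    return alerts

events = [
    {"container": "app", "exe": "/opt/app/run"},
    {"container": "app", "exe": "/usr/bin/xmrig"},  # a known cryptominer binary
]
print(detect_anomalies(events))  # ['app: unexpected process /usr/bin/xmrig']
```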

Network segmentation is another concern for minimizing your attack surface. In Kubernetes, pods can communicate with all other pods in a cluster by default, because no network policies are applied out of the box. When you enforce zero trust network policies, you can make sure a single compromised pod won’t lead to a compromise of all pods within that cluster.
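
A common starting point is a default-deny NetworkPolicy, after which traffic is only allowed where a more specific policy opens it. The namespace name here is illustrative:

```yaml
# Default-deny NetworkPolicy: pods in this namespace accept no ingress or
# egress traffic unless another, more specific policy allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: example-ns
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```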

Finally, incident response strategies can help teams respond appropriately to events. Responses can include sending events to a Security Information and Event Management (SIEM) system, alerting the application owner with detailed information and steps on which deployment needs remediation, and even killing and restarting pods automatically. Responses should follow the practice of rebuilding and redeploying problem containers, rather than patching running containers.

Whitepaper: Learn about a layered approach to container and Kubernetes security

Another layer of container security is the isolation provided by the container’s node/host operating system (OS). You need a host OS that provides maximum container isolation. This is a big part of what it means to defend your container deployment environment. The host OS in a containerized, Kubernetes environment is shared among containers, and is managed by a container runtime, which interacts with Kubernetes to create and manage containers (or pods of containers).

The host OS should be isolated from the container, in order to prevent a single compromised container from compromising the host OS and all the other containers. To make your container platform resilient, use network namespaces to sequester applications and environments, and attach storage via secure mounts. Don't configure your container runtime to share the host network namespace, IPC namespace, or UTS namespace. Choose a container-optimized host operating system that's prehardened, and use host vulnerability scanning.
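
Those namespace-sharing settings can be audited mechanically. A sketch that checks a pod spec (parsed into a dict, for example from YAML) for the Kubernetes fields that share host namespaces:

```python
# Audit a pod spec for host namespace sharing, which breaks the isolation
# between containers and the host. Specs below are illustrative.
RISKY_FIELDS = ("hostNetwork", "hostPID", "hostIPC")

def audit_host_namespaces(pod_spec: dict) -> list[str]:
    """Return the host-namespace fields a pod spec enables."""
    return [f for f in RISKY_FIELDS if pod_spec.get(f)]

safe = {"containers": [{"name": "app"}]}
risky = {"hostNetwork": True, "hostPID": True, "containers": [{"name": "app"}]}
print(audit_host_namespaces(safe))   # []
print(audit_host_namespaces(risky))  # ['hostNetwork', 'hostPID']
```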

APIs are central to how containers, platform services, and orchestrators communicate, so they need protection too. An API management solution should include authentication and authorization, LDAP integration, endpoint access controls, and rate limiting.

When deciding how to defend your container infrastructure, ask:

  • Which containers need to access one another? How will they discover each other?
  • How will you control access and management of shared resources (e.g. network and storage)?
  • How will you monitor container health?
  • How will you automatically scale application capacity to meet demand?
  • How will you manage host updates? Will all of your containers require updates at the same time?

Red Hat® OpenShift® includes Red Hat Enterprise Linux®. It automates the container application life cycle, integrates security into the container pipeline, and enables your transition from DevOps to a DevSecOps strategy. Our container catalog provides you with access to a large number of certified images, language runtimes, databases, and middleware that can run anywhere you run Red Hat Enterprise Linux. Images from Red Hat are always signed and verified to ensure origin and integrity.

We monitor our container images for newly discovered vulnerabilities (which includes a continually updated and publicly visible health index), as well as release security updates and container rebuilds that are pushed to our public registry. Red Hat Advanced Cluster Security for Kubernetes integrates with DevOps and security tools to help mitigate threats and enforce security policies that minimize operational risk to your applications.

Red Hat Service Interconnect allows containers to access and communicate with one another while minimizing the added risk to your organization’s security or the user’s data.

Red Hat’s security partners can extend and enhance our container security capabilities with certified integrations. Red Hat OpenShift has security built into the platform, which complements our security partner solutions, to help secure applications and containers throughout the DevOps life cycle.

The platform also provides:

  • Web-scale container orchestration and management
  • Rich web console with multi-user collaboration features
  • CLI & IDE interfaces
  • Integration with CI
  • Build automation & source-to-image
  • Deployment automation
  • Support for remote storage volumes
  • Simplified installation & administration
  • A large collection of supported programming languages, frameworks, & services
Learn more about integrating security guardrails with Red Hat® Advanced Cluster Security for Kubernetes