The format of container images is at the center of industry attention because it is so important to the adoption of containers. With the advent of the Open Container Initiative (OCI), it seems appropriate to compare container images to network protocols. Before TCP/IP became the de facto standard network protocol stack, each vendor devised its own. Some leveraged IPX/SPX, while others standardized on AppleTalk. This made it difficult to create robust tooling. Much like network protocols, standardizing the bit-level format of a container image allows the industry to focus on higher-level business problems and, more importantly, their respective solutions.
The adoption of TCP/IP allowed the industry to move beyond analyzing just the routing and session headers (stateful firewalling) to analyzing the data inside the packets. Initially, this allowed the construction and sale of standardized routers and firewalls, which could make ingress/egress decisions about network traffic based on source, destination, and protocol type. This work eventually culminated in application firewalls (Layer 7), which can make sophisticated decisions about a REST API, even blocking traffic if credit card numbers are embedded in the HTTP payload.
This movement eventually led to an area referred to as DPI, or “Deep Packet Inspection”. The primary mission of DPI is to introspect network packets and examine them for malicious or unwanted content: viruses, spam, or even compliance and policy violations.
As we move into the world of application containers, a similar need is emerging: the requirement to look beyond the container image format and into the container image itself. We can solve real business problems by preventing untrusted or compromised container images from being started or used to build an application. Enter DCI: Deep Container Inspection (more on this in a few paragraphs).
Current Trust Model
Currently, there are two major types of actors in container image distribution: registry servers and container hosts. Registry servers distribute images; container hosts cache and run them. Trust is determined at the point where an image is transferred from a registry server to a container host. The distribution network is a standard client/server model and typically runs over HTTPS to provide transport security over an untrusted network.
With this methodology, distributing container images relies heavily on a circle of trust: each actor in the software supply chain decides whether it trusts its upstream supplier. Using a registry server that provides technical controls, such as Satellite 6.1 with the Docker plugin, administrators can decide which container image content they will let into their network.
There are a couple of nuances that might go unnoticed in the circle-of-trust model. The first is container aging. You might have trusted an image when it was first produced, but over time that trust becomes stale. As an image ages, it becomes susceptible to more and more security vulnerabilities. A recent report cited that 30% of the images on Docker Hub contain high-priority security vulnerabilities.
Another challenge with aging is data. Configuration, database, or text data embedded in an image may become stale, and starting an application with that stale data can cause serious problems for your application and infrastructure.
Another nuance that might go unnoticed is that you must trust both the upstream actor distributing the image and the image content they are delivering. Notary and The Update Framework (TUF) are two projects working to verify image content so that it can be distributed by non-trusted actors, similar to how RPMs and ISOs are distributed on mirror sites. This makes the system more flexible, but it does not address trust in the actor that created the container image.
Deep Container Inspection (DCI)
The current trust model doesn’t address the engineering methodology by which a container image was built. It’s time for a new approach: the goal of deep container inspection is to go further than the current trust models. Verifying where an image came from is only part of the battle; verifying what is inside the image allows you to make business decisions, much as deep packet inspection does for network traffic. Let’s walk through a couple of use cases.
Use Case 1: Inspect Policy in the User Space
Since container images provide a simplified method for packaging and shipping the user space around, developers and administrators need to understand what programs and configuration files have been embedded inside that user space.
OpenSCAP is becoming a popular and robust tool for analyzing the user space for security vulnerabilities or configuration compliance, and it makes sense to leverage it for container images. Red Hat publishes Open Vulnerability Assessment Language (OVAL) vulnerability data and is actively working on open source tools to scan container images for Common Vulnerabilities and Exposures (CVE) identifiers. Current work uses the RPM database embedded in the container image, at rest or running, to compare against the public OVAL data. This allows an end user to determine whether any CVEs apply to a given image.
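To make the comparison step concrete, here is a minimal Python sketch. It assumes you have already extracted the installed package list from an image’s RPM database (for example, with `rpm -qa` run inside the image) and hand-reduced the public OVAL feed to a simple “fixed-in” mapping; the `FIXED_IN` table, the naive version comparison, and all names here are illustrative, not part of any real scanner, which would use rpm’s own version comparison (rpmvercmp).

```python
# Illustrative sketch, not a real scanner: flag installed RPMs that fall
# below the "fixed-in" version published for a CVE. Package data is
# assumed to come from the image's RPM database, e.g.:
#   docker run --rm IMAGE rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE}\n'

# Hypothetical, hand-reduced OVAL data: package -> (fixed version, CVE id)
FIXED_IN = {
    "openssl": ("1.0.1e-34", "CVE-2014-0160"),  # Heartbleed, for example
}

def parse_evr(evr: str):
    """Split '1.0.1e-30' into comparable chunks (naive; real tools
    use rpmvercmp, which handles epochs and mixed segments)."""
    version, _, release = evr.partition("-")
    return version.split("."), release.split(".")

def is_vulnerable(installed: str, fixed: str) -> bool:
    return parse_evr(installed) < parse_evr(fixed)

def scan(installed_packages: dict) -> list:
    """installed_packages: name -> version-release from the image."""
    findings = []
    for name, evr in installed_packages.items():
        if name in FIXED_IN:
            fixed, cve = FIXED_IN[name]
            if is_vulnerable(evr, fixed):
                findings.append((name, evr, cve))
    return findings

if __name__ == "__main__":
    image_rpms = {"openssl": "1.0.1e-30", "bash": "4.2.46-12"}
    for name, evr, cve in scan(image_rpms):
        print(f"{name}-{evr} is affected by {cve}")
```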
OpenSCAP can also be used by an end user to verify policies implemented in their customized core builds. This could let administrators know whether end users are building container images that are out of compliance with corporate policies. For example, administrators could validate whether a container image places its configuration files where the operations team’s policies say they belong.
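As a toy illustration of that kind of check (not how OpenSCAP itself is implemented), the sketch below walks an unpacked image filesystem and flags configuration files that live outside approved directories; the `APPROVED` list is an invented policy, and the unpack path is a placeholder.

```python
# Toy policy check: flag *.conf files outside approved directories.
# Assumes the image has been unpacked to a directory on disk
# (for example, via `docker export` piped to tar).
import os

APPROVED = ("/etc", "/usr/share/doc")  # hypothetical corporate policy

def check_config_locations(image_root: str) -> list:
    violations = []
    for dirpath, _dirnames, filenames in os.walk(image_root):
        for name in filenames:
            if not name.endswith(".conf"):
                continue
            # Reconstruct the path as it appears inside the container
            in_image = "/" + os.path.relpath(
                os.path.join(dirpath, name), image_root)
            if not in_image.startswith(APPROVED):
                violations.append(in_image)
    return violations

if __name__ == "__main__":
    for path in check_config_locations("/tmp/unpacked-image"):
        print(f"policy violation: {path}")
```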
The ManageIQ community, which is upstream of Red Hat CloudForms, is working on bringing the full capabilities of Smart State Analysis to container images. Since a Smart State Analysis saves metadata about all packages and configuration files to the Virtual Management Database (VMDB), administrators will be able to track the changes made to a container image and even perform sophisticated drift analysis between versions of an image, or between different images entirely.
Another goal is to allow the container image to be scanned before it is even downloaded from a registry server (and especially before it is instantiated). Red Hat is working in the ManageIQ community to be able to remotely scan images before downloading them.
Use Case 2: Inspect Policy in the Metadata
The metadata in the container image is another interesting area for developers and administrators. This information includes things like Architecture, Build Host, Docker Version, and even arbitrary key-value information embedded in labels.
A developer or administrator may want to understand what privileges a container needs before pulling and running it. By analyzing the Atomic RUN label, the end user can quickly determine whether the container needs to run with the --privileged flag and decide whether it is safe to run.
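A minimal sketch of that check, assuming the image follows the Project Atomic convention of embedding its run command in a RUN label (the JSON shape mirrors `docker inspect` output, but treat the details, including the example image name, as illustrative):

```python
# Check whether an image's RUN label requests privileged mode.
# Input is the JSON produced by `docker inspect IMAGE`; the RUN label
# convention comes from Project Atomic.
import json
import subprocess

def requests_privileged(image: str) -> bool:
    raw = subprocess.check_output(["docker", "inspect", image])
    config = json.loads(raw)[0].get("Config") or {}
    labels = config.get("Labels") or {}
    return "--privileged" in labels.get("RUN", "")

if __name__ == "__main__":
    image = "registry.example.com/rsyslog"  # hypothetical image name
    if requests_privileged(image):
        print(f"{image} asks to run with --privileged; review before running")
```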
Cluster administrators may want to verify the architecture of a container image before assigning it to a cluster. For example, if the metadata indicates that the container image contains a user space with “Architecture: x86_64”, it is desirable to ensure that it does not get assigned to an ARM cluster.
Company policy may require that all container images be built on a certain host. For example, it may be desirable to verify that Red Hat did indeed build an image by checking the tag “Build_Host: rcm-img-docker01.build.eng.bos.redhat.com”.
Finally, an administrator may want to verify the “DockerVersion: 1.6.2” metadata of an image to guarantee it will run on the production container infrastructure. One could also imagine embedding syscall table information in the metadata at build time, so that a cluster administrator could later verify that the kernel on the container host supports the user space in the container image.
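Pulling these three checks together, here is a hedged sketch of a metadata gate. It takes the image metadata as a flat dictionary whose field names follow the examples above (real inspect output nests them differently), and the trusted host, architecture, and version values are just the ones quoted in this post:

```python
# Sketch of a metadata policy gate using the fields discussed above.
# A real tool would pull these out of `docker inspect` output or
# registry-side metadata rather than a flat dict.
TRUSTED_BUILD_HOSTS = {"rcm-img-docker01.build.eng.bos.redhat.com"}
CLUSTER_ARCH = "x86_64"
SUPPORTED_DOCKER = "1.6.2"

def policy_failures(metadata: dict) -> list:
    failures = []
    if metadata.get("Architecture") != CLUSTER_ARCH:
        failures.append(f"architecture {metadata.get('Architecture')} "
                        f"does not match cluster ({CLUSTER_ARCH})")
    if metadata.get("Build_Host") not in TRUSTED_BUILD_HOSTS:
        failures.append("image was not built on a trusted build host")
    if metadata.get("DockerVersion") != SUPPORTED_DOCKER:
        failures.append(f"built with Docker {metadata.get('DockerVersion')}, "
                        f"production supports {SUPPORTED_DOCKER}")
    return failures

if __name__ == "__main__":
    image_metadata = {
        "Architecture": "x86_64",
        "Build_Host": "rcm-img-docker01.build.eng.bos.redhat.com",
        "DockerVersion": "1.6.2",
    }
    for reason in policy_failures(image_metadata):
        print("policy failure:", reason)
```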
Use Case 3: Policy Decisions
Once developers and administrators begin to use data in the container image and its metadata to make manual decisions, the next step in DCI is to make automated decisions based on codified policy.
Once the ManageIQ community adds support for scanning images remotely, the pieces will be in place to do Smart State Analysis and make policy decisions based on any of the container image data or associated metadata.
This will open up a new world of governance, control, and compliance for administrators: imagine the ability to block the download of non-compliant images to a local registry server, to prevent unwanted workloads from running, or even to automatically stop workloads that no longer meet local policy and governance requirements.
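A codified version of such a policy might look like the following sketch: each rule is a predicate over whatever DCI scanning has collected about an image, and the enforcement point maps failures to an action (block download, refuse to run, stop the workload). The rule names, actions, and report shape are all invented for illustration.

```python
# Illustrative policy engine: rules are (name, predicate, action)
# triples evaluated against a hypothetical DCI scan report.
BLOCK_DOWNLOAD = "block-download"
BLOCK_RUN = "block-run"
STOP_WORKLOAD = "stop-workload"

RULES = [
    ("no-known-cves",
     lambda r: not r["cves"], BLOCK_DOWNLOAD),
    ("trusted-build-host",
     lambda r: r["metadata"].get("Build_Host", "").endswith(".redhat.com"),
     BLOCK_RUN),
    ("max-image-age-90-days",
     lambda r: r["age_days"] <= 90, STOP_WORKLOAD),
]

def enforce(report: dict) -> list:
    """Return the actions triggered by a DCI scan report."""
    return [(name, action) for name, ok, action in RULES if not ok(report)]

if __name__ == "__main__":
    report = {
        "cves": ["CVE-2014-0160"],   # from the user-space scan
        "metadata": {"Build_Host": "builder.example.com"},
        "age_days": 120,
    }
    for name, action in enforce(report):
        print(f"rule '{name}' failed -> {action}")
```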
Where Will This Go?
DCI could eventually extend to include policy-aware container infrastructure, kicking off CI/CD rebuilds when metadata, configuration, or data changes in upstream container images. For example, container images could be automatically rebuilt when an upstream image is detected to have the Heartbleed vulnerability. And, just as DPI opened the door for advanced quality of service (QoS) based on network transport implementations like Multiprotocol Label Switching (MPLS), so too will DCI open the door for advanced QoS in the container world. This will be the subject of a subsequent blog post.
Conclusion
With the advent of Deep Container Inspection (DCI), a new level of analysis will be possible. The trust model will be much more sophisticated. Even though you may trust the upstream providers of your container images, with DCI, you will be able to verify the trust of container images even after local administrators or developers have made modifications.
Thoughts and/or questions? As always, I encourage you to reach out using the comments section (below).