In some environments, it’s mandatory to be able to certify (validate) the integrity and authenticity of the software being used. Validating the integrity in this case refers to ensuring that the software component has not been tampered with. Validating the authenticity means making sure that the software component is not only intact, but is also exactly the component that the software supplier intended to ship for that specific release.

This problem has been solved previously within the Fedora/RHEL ecosystem with RPM packages. RPM packages carry an embedded signature from the software distributor that can be used to verify integrity and authenticity.

But now, with OpenShift being released as a set of container images, how can we achieve the same level of guarantees?

To answer this question, we need to first understand how integrity and authenticity can be verified for a single image and then how the chain of trust can be established for all the components needed by OpenShift during the installation and upgrade processes.

Container Image Anatomy

A container image is composed of a set of file system layers, and an image config file. In a repository, a container image is described by an image manifest. Let’s describe these components starting with the image manifest.

An image manifest is a JSON document stored by a container registry that describes the components of a container image. Here is an example:

   {
       "schemaVersion": 2,
       "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
       "config": {
           "mediaType": "application/vnd.docker.container.image.v1+json",
           "size": 7023,
           "digest": "sha256:b5b2b2c507a0944348e0303114d8d93aaaa081732b86451d9bce1f432a537bc7"
       },
       "layers": [
           {
               "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
               "size": 32654,
               "digest": "sha256:e692418e4cbaf90ca69d05a66403747baa33ee08806650b51fab815ad7fc331f"
           },
           {
               "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
               "size": 16724,
               "digest": "sha256:3c3a4604a545cdc127456d94e421cd355bca5b528f4a9c1905b15da2eb4a4c6b"
           },
           {
               "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
               "size": 73109,
               "digest": "sha256:ec4b8955958665577945c89419d1af06b5f7636b4ac3da7f12184802ad867736"
           }
       ]
   }

The example above shows an image with three layers and a config file (in the manifest v2 format, there is always exactly one config file). For each component, the manifest records the media type (the format in which the component is stored) and a digest, which can be used to verify that the downloaded component has not been tampered with.

The image config file contains several pieces of metadata related to the image. This resource is typically composed of a long JSON file. The following is a redacted example:

   {
       "created": "2015-10-31T22:22:56.015925234Z",
       "author": "Alyssa P. Hacker <>",
       "architecture": "amd64",
       "os": "linux",
       "config": {
           ...
       },
       "rootfs": {
           "diff_ids": [
               ...
           ],
           "type": "layers"
       }
   }

Relevant to this discussion is the rootfs section, which again lists the image layers. This time, though, each digest represents the sha256 hash of the uncompressed layer on disk.

The layers of an image, therefore, are referenced by the image manifest, which provides the digest of the compressed form that was used to store the layers in the registry (and can be used to retrieve the layers), and by the image config, which provides the digest of the uncompressed (on disk) format.
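The difference between the two digests can be illustrated with a small sketch. The toy "layer" below stands in for the tar stream a real layer would contain; the names are illustrative, not part of any API:

```python
import gzip
import hashlib

# A toy "layer": in a real image this would be a tar archive of files.
uncompressed = b"example layer content (a tar stream in a real image)"
compressed = gzip.compress(uncompressed)

# Digest recorded in the image manifest: hash of the *compressed* blob,
# as stored in (and fetched from) the registry.
manifest_digest = "sha256:" + hashlib.sha256(compressed).hexdigest()

# Digest recorded in the image config's rootfs.diff_ids: hash of the
# *uncompressed* layer content, as laid out on disk.
diff_id = "sha256:" + hashlib.sha256(uncompressed).hexdigest()

print(manifest_digest)
print(diff_id)
print(manifest_digest != diff_id)  # the two digests differ
```

Both digests identify the same logical layer, but only the first can be checked against the bytes received from the registry.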

A container runtime can use this metadata to implement several validation and storage optimization policies (for example: layer reuse).

CRI-O, in particular, does not use any information from the image config file regarding uncompressed layers; it stores information on the uncompressed layers in its own internal files.

Image Integrity Validation at Pull Time

In order to validate an image at pull time (that is, when it is downloaded from a registry), we need to know the digest of the image manifest. Assuming that we have obtained a trusted digest, and we know the image location, validation comprises the following steps:

  1. Download the manifest and calculate its digest.
  2. Compare the calculated digest with the known digest. If there is a match, the image manifest can be trusted.
  3. Download the image config and calculate its digest.
  4. Compare the calculated digest with the image config digest present in the manifest. If there is a match, the image config file can be trusted.
  5. For each layer, download the layer and calculate its digest.
  6. If the calculated digest matches the digest present in the manifest, the layer can be trusted.

If each of these steps passes, the integrity of the image as a whole has been validated.
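The steps above can be sketched in a few lines of Python. This is a simplified model, not how CRI-O is implemented: `fetch` is a hypothetical callable that downloads a blob by digest, and the in-memory "registry" in the demo stands in for a real one:

```python
import hashlib
import json

def digest(blob: bytes) -> str:
    return "sha256:" + hashlib.sha256(blob).hexdigest()

def validate_image(trusted_manifest_digest: str, fetch) -> bool:
    # Steps 1-2: download the manifest and compare against the trusted digest.
    manifest_blob = fetch(trusted_manifest_digest)
    if digest(manifest_blob) != trusted_manifest_digest:
        raise ValueError("manifest digest mismatch")
    manifest = json.loads(manifest_blob)
    # Steps 3-4: download the image config and compare against the manifest.
    config_blob = fetch(manifest["config"]["digest"])
    if digest(config_blob) != manifest["config"]["digest"]:
        raise ValueError("config digest mismatch")
    # Steps 5-6: download each layer and compare against the manifest.
    for layer in manifest["layers"]:
        if digest(fetch(layer["digest"])) != layer["digest"]:
            raise ValueError("layer digest mismatch")
    return True

# Demo with an in-memory "registry" of blobs keyed by digest:
layer = b"layer bytes"
config = b'{"os": "linux"}'
manifest = json.dumps({
    "config": {"digest": digest(config)},
    "layers": [{"digest": digest(layer)}],
}).encode()
blobs = {digest(b): b for b in (layer, config, manifest)}
print(validate_image(digest(manifest), blobs.__getitem__))  # True
```

Note how trust flows downward: a single trusted manifest digest is enough to validate the config and every layer, because each digest is embedded in an already-verified document.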

It is important to note that in order to validate the integrity of an image, we do not need to trust the registry from which we are pulling the image.

CRI-O, which is the container runtime used by OpenShift, always executes steps 3 to 6. Steps 1 and 2 are executed only when the manifest digest is known. This occurs in two situations:

  1. The image is pulled by digest with the @sha256 notation (for example, registry.example.com/namespace/image@sha256:<digest>).
  2. The image signature is also being validated (see below).

Image Integrity Validation on Disk

While it is theoretically possible to build a function to validate the uncompressed layer content on disk, this is not currently supported by CRI-O (see also this bz) or any other container runtime I am aware of. Being able to verify the content on disk would, in principle, protect against an attack where a malicious user gains access to a machine in which a trusted image has been pulled and changes some of the files of the uncompressed layers.

While this scenario is possible, if we admit that a malicious user can gain access to a node as root (the level of permission required to modify container image files), then such an attacker can also change the container runtime binary or configuration in such a way as to disable the on-disk integrity check, nullifying the benefit of this control.

This is certainly the case for Kubernetes/OpenShift nodes, where, if a hacker can gain access to those nodes, they can compromise the entire node (not just an image) or even the entire cluster (when the node is a master node).

Image Authenticity Validation

Validating the authenticity of an image refers to being able to certify that the image was published by a trusted source (in our case, Red Hat). This typically involves digitally signing an image.

An image digital signature is a PGP-signed JSON document with the following format:

   {
       "critical": {
           "type": "atomic container signature",
           "image": {
               "docker-manifest-digest": "sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e"
           },
           "identity": {
               "docker-reference": ""
           }
       },
       "optional": {
           "creator": "some software package v1.0.1-35",
           "timestamp": 1483228800
       }
   }

Signatures can be stored in a web server with a specific layout. This server is referred to as sigstore.

Relevant to this discussion is the fact that this digitally signed document brings together the image identity (or pull spec) and the digest of the manifest.

Essentially, by knowing the pull spec of an image and a sigstore with that image signature, we can validate authenticity and integrity of an image by:

  1. Downloading the signature. The signature can be found at a well-known URI in the sigstore, which is based on the image name and manifest digest.
  2. Verifying the signature based on a set of trusted PGP public keys (that is, trusted identities).
  3. Unpacking the signature and extracting the digest of the manifest and the image name. Both have to match: the former, the calculated manifest digest; the latter, the image name requested by the user.
  4. Following the steps described above for integrity validation at pull time.
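The "well-known URI" mentioned in step 1 follows the lookaside (sigstore) layout used by podman and CRI-O: the repository path, the manifest digest (with the colon replaced by an equals sign), and a signature index. A minimal sketch, with a hypothetical sigstore base URL:

```python
def sigstore_url(base: str, repo_path: str, manifest_digest: str, index: int = 1) -> str:
    """Build the well-known sigstore URL for an image signature.

    Assumes the lookaside layout used by podman/CRI-O:
    <base>/<repo-path>@<algo>=<hex>/signature-<index>
    where repo_path is the image name without the registry host.
    """
    algo, hexdigest = manifest_digest.split(":", 1)
    return f"{base}/{repo_path}@{algo}={hexdigest}/signature-{index}"

# sigstore.example.com is a placeholder, not a real Red Hat endpoint.
print(sigstore_url(
    "https://sigstore.example.com",
    "openshift-release-dev/ocp-release",
    "sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e",
))
```

Because the URL embeds the manifest digest, a client that knows the trusted digest can locate the signature without trusting the web server that hosts it.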

This blog post describes in detail how to configure image signatures for podman/crio.

Image Mirroring

Image mirroring is a podman feature that decouples the logical name (identity) of an image from its physical location. One can configure podman so that all images matching a given name pattern are translated to a different pull spec when they need to be pulled.

Because of this separation between logical name and physical location, all the operations that act on the logical name do not need to change if the image is moved to a different repository (that is, if it’s mirrored). This includes a client asking to pull the image by logical name.

This is what allows OpenShift to be run in air-gapped environments with a mirrored registry without having to rename all the images that comprise an OpenShift release.

Signature verification is considered to operate on the logical name of the image. This implies that, given a set of signed images that need to be mirrored, one can mirror the images to a new repository, copy the signatures to an internal sigstore, and configure mirroring along with signature validation in podman, and everything will keep working as expected. In other words, there is no need to re-sign the images when changing their location.
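As a concrete illustration, such a mirror can be declared in the registries.conf v2 format read by podman and CRI-O. The mirror hostname below is a placeholder; the prefix shows the OpenShift release repository as an example:

```toml
# /etc/containers/registries.conf (v2 format) -- sketch, hostnames are examples
[[registry]]
prefix = "quay.io/openshift-release-dev/ocp-release"
location = "quay.io/openshift-release-dev/ocp-release"

[[registry.mirror]]
location = "mirror.example.com/ocp-release"
```

With this in place, a pull of the logical name is transparently redirected to the mirror, while signature policy keeps applying to the logical name.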

Trusting an OpenShift Installation

To be able to validate the integrity and authenticity of an OpenShift installation, there needs to be a process to validate all of its components, including the binaries and container images.

Trusting OpenShift Binaries Needed for the Installation

The binaries needed for an OpenShift installation are: oc, openshift-install and one of the RHCOS base image distributions.

The oc and openshift-install binaries can be downloaded and their signatures verified (this KB explains the process).

The same applies to the RHCOS image. For example, using this resource, you can find the RHCOS images and the digests for the latest release.
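Once the published sha256sum file has itself been verified (for example, with gpg), checking a downloaded artifact against it is straightforward. A small sketch; the filename in the demo is illustrative:

```python
import hashlib

def verify_checksum(path: str, sha256sum_line: str) -> bool:
    """Check a downloaded artifact against one line of an already
    GPG-verified sha256sum.txt file, e.g.
    "<64 hex chars>  openshift-install-linux.tar.gz"."""
    expected, _, _filename = sha256sum_line.strip().partition("  ")
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected

# Demo with a throwaway file standing in for the downloaded archive:
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"fake archive")
    path = f.name
line = hashlib.sha256(b"fake archive").hexdigest() + "  openshift-install-linux.tar.gz"
print(verify_checksum(path, line))  # True
os.remove(path)
```

The crucial point is the order of operations: verify the signature on the checksum file first, then check the artifact against it; otherwise the checksum proves nothing.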

Trusting an Online OpenShift installation

In OpenShift 4.x, there is a release image for each release (including minor releases). This is a special image that represents an OpenShift release and contains a manifest of all the images that comprise the release (referenced with the @sha notation).

For an online (connected) OpenShift installation, the chain of trust is established as follows:

  1. The openshift-install archive is trusted via the signed sha256sum file.
  2. The openshift-install binary is transitively trusted.
  3. The release image digest is hardcoded within the openshift-install binary; therefore, it is transitively trusted.
  4. The digests of the images that comprise the release are hardcoded in the release image and are therefore transitively trusted.
  5. Since images are referenced by digest, their integrity is verified at pull time.
  6. Since the digests are trusted, the images' authenticity can be verified and ultimately traced back to the signer of the openshift-install archive.

As you can see from this process, trust is established transitively, without the need of image signatures.

Trusting a Disconnected Installation

In the case of a disconnected installation, we assume that the images have been correctly mirrored. As explained previously, we don’t need to place any trust in the mirroring process or in the registry mirror. So, it doesn’t matter how the mirroring occurs (although we recommend following the instructions found in the docs) or what registry product is used. If the images are not correct, the installation process will simply fail.

In a disconnected install, trust is established as follows:

  1. The oc archive is signed.
  2. The oc binary is transitively trusted.
  3. The openshift-install binary is obtained from the trusted oc binary by passing a trusted release image pull spec in digest format from the mirrored repository; therefore, it is transitively trusted. For each release, a manifest of the release is published (for example, at this location) and signed. From this manifest, one can obtain a trusted digest of a release image.
  4. Image mirroring is configured in the install-config manifest, and from here, the installation proceeds as for a connected install (step 3) and trust is established in the same manner.

Notice that at present, the documentation suggests passing a release image by tag. Using this method would break the chain of trust. A fix to the documentation is in flight.

Trusting an OpenShift Update

The issue of trust also arises at update time, when, again, there is a need to ensure that the new binaries and images downloaded to update the systems are intact and authentic.

Update information for OCP 4.x is available at a well-known endpoint called the “Cincinnati” service. It provides information on the release images for each release and the available update paths (that is, if it’s technically supported to upgrade from one given release to another). This endpoint is signed with a Red Hat certificate and therefore trusted. You can query this API with the following command:

curl -H 'Accept: application/json' -L '' | jq .

Trusting an Online Update

When you perform an online upgrade, the options presented to you in the UI originate from the Cincinnati service. Based on the target release selected, a release image is also selected. Trust is established as follows:

  1. The Cluster Version Operator (CVO) pulls the selected release image, verifying its signature.
  2. From here, the CVO pulls all other images that comprise the release and trust is established the same way as for the online installation from point number 5.

Image signature is verified based on the configuration of a ConfigMap called release-verification within the openshift-config-managed namespace. This ConfigMap contains the public PGP key used to verify the signature and the sigstore address.

Image signature verification in this case protects OpenShift from a compromise of the Cincinnati service.

Notice that currently, the sigstore layout is not compliant (ART-1545) and only the release image is signed and not all of the images that comprise a release.

Trusting a Disconnected Update

For a disconnected upgrade, we assume that images have been correctly mirrored. Again, how this is done is not relevant to this discussion, because we don’t need to trust that portion of the process.

We also assume that by consulting the Cincinnati service, the target release has been selected ensuring that the upgrade path is allowed. So, in this case, we know the pull spec of a release image by digest.

Trust is established as follows:

  1. The upgrade is triggered by issuing the following command: oc adm upgrade --from-image=<new-release-image> --force.
    The --force flag disables the image signature check. This is necessary because, in a disconnected environment, the sigstore is not accessible. Trust is still established because we trust the release image and we express the pull spec by digest.
  2. From this point, trust is established in the same manner as for the connected upgrade described above (except that the signature is not verified).

Currently, OpenShift tooling does not support mirroring the sigstore in an air-gapped environment. Support for signature validation in air-gapped environments will be added in OCP 4.4.


In this article, we described how the OpenShift software distribution process can be trusted. Keep in mind, this process may change (mature) in future releases of OpenShift.

In addition, it is important to note that we covered only the certification of the images that strictly comprise an OpenShift release and not, for example, how to certify the images of the operators available from Operator Hub or the images of the OpenShift samples. This choice was made to limit the scope of the article, but hopefully the process has been explained well enough that one can set up a certifiable process for those and other use cases.

About the author

Raffaele is a full-stack enterprise architect with 20+ years of experience. Raffaele started his career in Italy as a Java Architect then gradually moved to Integration Architect and then Enterprise Architect. Later he moved to the United States to eventually become an OpenShift Architect for Red Hat consulting services, acquiring, in the process, knowledge of the infrastructure side of IT.
