Introduction
GitOps is a term that has become very popular in the last few years and is easily on its way to becoming just as overloaded with myth and mystery as DevOps. In this series of articles, we will present the principles and practices of GitOps, explaining the why and how of the automated processes that aim to deliver secure, high-quality, microservice-based applications quickly and efficiently. In part 1 of the series, we introduced the main concepts of GitOps, together with the open source automation technologies Tekton and ArgoCD. These tools operate on the Red Hat OpenShift platform to deliver a cloud-native continuous integration and continuous delivery process. The first article also gave an indicative structure of Git repositories and ArgoCD applications that can create a secure and audited process for delivery to production. This article continues the series by explaining how container images produced during the continuous integration phase can be successfully managed and used within the continuous delivery phase.
Container image creation
A fundamental element of the GitOps model for microservice-based applications is the creation of new container images. Such container images will have a location where they are stored and a unique tag to identify the specific version of the image. Container images will typically also have a number of labels applied that describe important characteristics of the image, such as the maintainer, version identifiers, and the organization that produced the image.
There may be a temptation to use ‘latest’ as the tag for images, as this can be easy to consume when deploying the image to an environment: each deployment process simply uses the image name, followed by ‘:latest’, to get the most recent version. However, the use of ‘latest’ removes an opportunity for immediate differentiation of images deployed in specific environments, and it creates a requirement for a re-tagging process later in the pipeline when a unique identifier is required, such as when deploying to production. Using a unique tag from the point at which the image is created is a better approach, but it complicates the pipeline when it reaches the initial deployment phase: the pipeline needs to deploy the container to the development environment using an image tag that is not known at the time that the pipeline process begins. This creates a requirement to dynamically patch the deployment YAML file to use the new tag.
Figure 1 shows a pipeline in which the following tasks are performed (a sketch of such a pipeline follows the figure):
- Source code is built to produce a jar file.
- The jar file is then injected into a clean base image containing the runtime software required for the application.
- The new container image is pushed to an image registry with a unique tag created during the pipeline execution.
- The container is deployed so that initial testing can take place.
Figure 1: Container creation pipeline
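As an illustrative sketch, a Tekton pipeline chaining these tasks could be declared roughly as follows. The create-runtime-image task name matches the task referenced later in this article; the build-jar and deploy-to-development names, and the single shared workspace, are assumptions for illustration rather than the exact pipeline from the referenced repository.

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: myapp-ci-pipeline
  namespace: myapp-ci
spec:
  workspaces:
    - name: files
  tasks:
    # Build the application source to produce the jar file
    - name: build-jar
      taskRef:
        name: build-jar
      workspaces:
        - name: files
          workspace: files
    # Inject the jar into a clean runtime base image, tag it, and push it
    # to the image registry, exposing the new tag as a task result
    - name: create-runtime-image
      taskRef:
        name: create-runtime-image
      runAfter:
        - build-jar
      workspaces:
        - name: files
          workspace: files
    # Deploy the new image to the development environment for initial testing
    - name: deploy-to-development
      taskRef:
        name: deploy-to-development
      runAfter:
        - create-runtime-image
      workspaces:
        - name: files
          workspace: files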
Deployment.yaml requirements
The deployment YAML file (which I will assume is called deployment.yaml for a simple example) is shown below in figure 2. The image property of the container entry under spec.template.spec.containers identifies the container image to be used in the deployment. In the example in figure 2, the image is taken from the Red Hat OpenShift image stream. At the end of this article, a process is described for the permanent storage of images for deployments to production environments. The image specification line has a tag of <tag>, which is to be replaced with the tag created during the container image creation pipeline.
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: myapp
    app.kubernetes.io/part-of: liberty
  name: myapp
  namespace: myapp-development
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: image-registry.openshift-image-registry.svc:5000/myapp-ci/myapp-runtime:<tag>
          imagePullPolicy: Always
          ports:
            - containerPort: 9080
              name: http
              protocol: TCP
Figure 2: Deployment.yaml file showing image tag requirement
Kustomize for customization of deployments
The open source utility Kustomize has a variety of uses for the modification of Kubernetes YAML files. It can be used before a deployment to generate a new file to be applied to the cluster, or it can be used directly by the oc (or kubectl) command to apply dynamically patched resources to the cluster. Central to the use of Kustomize is the management of a kustomization.yaml file that refers to the Kubernetes resources to be deployed to the cluster. In addition to applying a set of files to the cluster with a single command, the kustomization.yaml file can contain text replacement specifications to be applied to the YAML files to which it refers.
An example of a basic kustomization.yaml file is shown in figure 3 below.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- myapp-deployment.yaml
- myapp-service.yaml
- myapp-route.yaml
- 01-rolebindings/argocd-admin.yaml
- 01-rolebindings/ci-pipeline-role.yaml
Figure 3: Simple Kustomize file
The file in figure 3 will apply the four files in the current directory and the two files in the subdirectory 01-rolebindings. No text substitution is used in this file.
The command below can be used to process the Kustomize file and apply the six resources to the cluster. The command assumes that the file kustomization.yaml is in the current directory.
oc apply -k .
Kustomize files are often used to refer to a base set of files to which text replacements are then applied. An example of this is shown in figure 4 below, in which a base set of common files is referenced.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
Figure 4: Kustomize file referencing a base directory.
Additional specific files can be added as shown in figure 3 above, to supplement the base set of resources.
A further example Kustomize file, shown in figure 5, contains a reference to a base set of files, together with text replacement specifications:
- The namespace declaration is replaced with the value ‘myapp-pre-prod’
- The number of replicas is changed to 4
The changes are to be applied to a deployment resource called ‘myapp’.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
  - patch: |-
      - op: replace
        path: /metadata/namespace
        value: myapp-pre-prod
      - op: replace
        path: /spec/replicas
        value: 4
    target:
      kind: Deployment
      name: myapp
resources:
  - ../base
Figure 5: Kustomize file with patch specifications
Kustomize has many capabilities for the creative modification of Kubernetes resources, and this description has barely scratched the surface of what it can do. Readers are encouraged to take a look at the official Kustomize documentation for further details.
Updating the image tag in the deployment file
Kustomize is to be used to ensure that the new image tag, produced as part of the build process, is used when the application is deployed to the development environment. This process does not modify the deployment.yaml file directly; instead, a new section is added to the Kustomize file so that when the Kustomize file is processed during the deployment action, the deployment.yaml file is updated in a just-in-time manner. The command to be executed to modify the Kustomize file is:
kustomize edit set image <image-name>:<new-tag>
This command will add lines similar to those shown in figure 6 to the Kustomize file:
images:
- name: image-registry.openshift-image-registry.svc:5000/myapp-ci/myapp-runtime
  newTag: <new-tag>
Figure 6: Example of the content to be added to the Kustomize file
For example, using the command:
kustomize edit set image image-registry.openshift-image-registry.svc:5000/myapp-ci/myapp-runtime:abcd-1234ef
will result in the addition or modification of the images block in the Kustomize file, as shown in figure 7:
images:
- name: image-registry.openshift-image-registry.svc:5000/myapp-ci/myapp-runtime
  newTag: abcd-1234ef
Figure 7: Example of content to be added to the Kustomize file
To test the Kustomize process, and to see the impact of the above change on the deployment.yaml file, the command below can be used:
kustomize build
This command will display the result of processing the Kustomize file: all referenced YAML files are rendered, with each distinct resource separated by a line of ‘---’ characters.
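For example, with the images entry from figure 7 in place, the Deployment section of the kustomize build output would show the image reference with the new tag applied, along the following lines (output truncated):

    spec:
      containers:
      - image: image-registry.openshift-image-registry.svc:5000/myapp-ci/myapp-runtime:abcd-1234ef
        imagePullPolicy: Always
        name: myapp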
Automation of the kustomization file change
To automate the process as part of a pipeline operation, a Tekton task can be used to execute the Kustomize command to add the image tag to the kustomization.yaml file. An example of a Tekton task that can perform this operation is shown in figure 8:
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: configure-kustomization-file
  namespace: myapp-ci
spec:
  params:
    - name: image-url
      type: string
  steps:
    - name: configure-kustomization-file
      script: >-
        #!/usr/bin/env bash

        set +x

        ls -al

        kustomize edit set image $(params.image-url)

        ls -al

        cat kustomization.yaml
      image: quay.io/marrober/kustomize:latest
      workingDir: /files/myapp-cd/env/overlays/01-dev
  workspaces:
    - name: files
      mountPath: /files
Figure 8: Tekton task for execution of ‘kustomize edit set image’ command
The container image specified in the configure-kustomization-file step in figure 8 (quay.io/marrober/kustomize:latest) is stored in the author’s quay.io public registry and is available should anyone wish to use it. The Dockerfile that was used to produce this image is located here. Additional tools added to the container image are explained in a subsequent article that provides details of the resource scanning process.
The image tag to be used in the ‘kustomize edit set image’ command is supplied to the task as a parameter by the pipeline process. This property is generated by a prior task (create-runtime-image) that is responsible for the creation of the container image in the OpenShift image stream. The tag is generated, applied to the image, and then set as a result of the task. This allows the Tekton pipeline to pass the tag as an input parameter to the configure-kustomization-file task shown in figure 8. The GitHub repository containing the task and pipeline resources described in this article is located here. The overall structure of the different GitHub repositories that make up the GitOps process is fully described in the first article in this series, located here.

The task in figure 8 shows that the command is executed within the context of a working directory that refers to the cloned assets from the myapp-cd GitHub repository. Specifically, the env/overlays/01-dev directory is identified as the location of the deployment resources responsible for deploying the application in the development environment. Parameters could be used to make this configurable, so that teams could supply specific repository and directory information as part of the pipeline run resource.
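The hand-off between the two tasks can be sketched as a pipeline fragment along the following lines; the result name image-url is an illustrative assumption and must match the result that the create-runtime-image task actually declares.

    - name: configure-kustomization-file
      taskRef:
        name: configure-kustomization-file
      runAfter:
        - create-runtime-image
      params:
        # The tag produced and published as a task result by create-runtime-image
        - name: image-url
          value: $(tasks.create-runtime-image.results.image-url)
      workspaces:
        - name: files
          workspace: files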
The kustomization.yaml file within the directory mentioned above will then have the correct tag added, in the format shown in figure 7.
The updated kustomization.yaml file can be used directly to deploy the application using the command:
oc apply -k <directory-containing-the-kustomization-file>
For example:
oc apply -k env/overlays/01-dev
More appropriately, to follow a GitOps model, the file should be committed to GitHub so that the change can be identified by an ArgoCD application and deployed by the ArgoCD synchronization process. This process is described in detail in a subsequent article.
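As a brief preview of that approach, an ArgoCD application watching the overlay directory could be declared along the following lines; the repository URL and the openshift-gitops namespace are illustrative assumptions:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-development
  namespace: openshift-gitops
spec:
  project: default
  source:
    # Illustrative location of the myapp-cd deployment assets
    repoURL: https://github.com/example-org/myapp-cd.git
    targetRevision: main
    path: env/overlays/01-dev
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp-development
  syncPolicy:
    automated:
      prune: true
      selfHeal: true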
Accessing the container image
The container image in this example is taken from the Red Hat OpenShift image stream that exists in a project called myapp-ci. The namespace for the deployment is myapp-development, so the namespace in which the deployment is performed is not the same as the namespace containing the image stream. The myapp-ci namespace is used to contain the continuous integration pipeline Tekton resources. Splitting the resources across namespaces in this manner provides an opportunity for teams to implement strict role-based access control, for example, to restrict developers from making changes to pipelines while allowing them to create assets in the myapp-development namespace. The exact role-based access control implementation requires considerable thought based on the requirements of each team. Splitting content across namespaces, and across separate GitHub repositories, will make any such regime easier to implement.
To allow the myapp-development namespace to pull images from the myapp-ci namespace, a specific role binding is needed. An example of the role binding is shown in figure 9 below.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: myapp-development
  namespace: myapp-ci
subjects:
  - kind: ServiceAccount
    name: default
    namespace: myapp-development
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: 'system:image-puller'
Figure 9: Role binding for access to container image in CI namespace
The above role binding can be read as:
Create a new role binding within the myapp-ci namespace, called myapp-development, that grants the cluster role system:image-puller to the default service account within the myapp-development namespace.
Or described another way:
The default service account in the myapp-development namespace has image-puller permission for the myapp-ci namespace.
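The same grant can also be made imperatively with the oc command line, which is equivalent to applying the role binding in figure 9:

oc policy add-role-to-user system:image-puller \
    system:serviceaccount:myapp-development:default \
    --namespace=myapp-ci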
Traceability and image tags
For traceability, the example presented here uses a tag made up of part of the commit ID of the source code and part of the pipeline run identifier. This allows users to trace the container image back to the source code from which the executable was produced, and to examine the pipeline run that produced the container image. Strictly speaking, this is not essential for traceability, as the container image can also have labels applied during the build process that contain the full commit ID and the pipeline run identifier, if desired. However, since unique tags are a good idea, and simply using ‘latest’ is frowned upon, the tag used here serves a useful purpose. The process of creating the image tag is included in the step generate-tag-push-to-ocp within the task create-runtime-image.
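As a rough sketch of how such a tag could be assembled within the generate-tag-push-to-ocp step, the following shell fragment combines a shortened commit ID with part of the pipeline run name; the parameter names and the image-tag result name are assumptions for illustration, not the exact implementation in the referenced repository.

#!/usr/bin/env bash
# Hypothetical tag generation inside a Tekton step script.
# COMMIT_ID and PIPELINE_RUN_NAME are assumed to arrive as task parameters;
# $(results.image-tag.path) is substituted by Tekton before the script runs.
SHORT_COMMIT="${COMMIT_ID:0:8}"
SHORT_RUN="${PIPELINE_RUN_NAME##*-}"
IMAGE_TAG="${SHORT_COMMIT}-${SHORT_RUN}"
echo -n "${IMAGE_TAG}" > $(results.image-tag.path)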
Push the image to Quay
To complete the image management process, the image needs to be copied from the OpenShift image stream in the CI namespace to a permanent storage solution such as Red Hat Quay. Teams can run a private instance of Quay on a cloud-based or on-premises OpenShift cluster, or they can take advantage of the hosted instance of Quay at quay.io. Other image registries exist, of course, and for the purposes of this article, Quay will be assumed.
As part of the pipeline process, and probably after the image has passed a vulnerability scan by Red Hat Advanced Cluster Security for Kubernetes, it is possible to push the image to Quay. A number of commands are available for the movement of the image from the OpenShift image stream to quay.io, and two of them are described below.
Buildah
To use the Buildah command to push the image to Quay, the following steps are performed (a sketch of the sequence is shown below the list):
- ‘buildah pull’ to pull the image from the OpenShift image stream into local container storage.
- ‘buildah tag’ to create a tag that identifies the target location on Quay.
- ‘buildah push’ to push the image to Quay.
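Assuming the image and tag used earlier in this article, the sequence looks roughly like this; the Quay account name is a placeholder:

buildah pull image-registry.openshift-image-registry.svc:5000/myapp-ci/myapp-runtime:abcd-1234ef
buildah tag image-registry.openshift-image-registry.svc:5000/myapp-ci/myapp-runtime:abcd-1234ef \
    quay.io/<quay-account>/myapp-runtime:abcd-1234ef
buildah push --authfile /etc/secret-volume/.dockerconfigjson \
    quay.io/<quay-account>/myapp-runtime:abcd-1234ef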
The Buildah command is required to authenticate as a user, or robot account, in Quay to be able to push the image to the registry. The Quay registry can generate credentials files in a Kubernetes secret format, such that the credentials can be stored in the CI namespace and consumed by the task step shown in figure 10 below:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: push-image-to-quay
spec:
  < parameters are skipped for readability >
  steps:
    - name: push-image-to-quay
      command:
        - buildah
        - push
        - '--storage-driver=$(params.STORAGE_DRIVER)'
        - '--authfile'
        - /etc/secret-volume/.dockerconfigjson
        - '--root'
        - '/files/buildah-containers'
        - quay.io/$(params.quay-io-account)/$(params.quay-io-repository):$(params.quay-io-image-tag-name)
      image: registry.redhat.io/rhel8/buildah
      resources:
        requests:
          memory: 2Gi
          cpu: '1'
        limits:
          memory: 4Gi
          cpu: '2'
      volumeMounts:
        - name: quay-auth-secret-vol
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: quay-auth-secret-vol
      secret:
        secretName: quay-auth-secret
Figure 10: Buildah push Tekton step
The above step shows that a volume is mounted to the pod running the task from the content of the secret called quay-auth-secret. The volume mounted to the pod is given the name quay-auth-secret-vol. The volume is mounted at the path /etc/secret-volume, so any keys within the secret’s data section will appear as files within that location. When the secret is examined using the command below, the data block is shown to hold content within the key called .dockerconfigjson.
oc get secret/quay-auth-secret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: <secret-data . . . . . . . >
kind: Secret
metadata:
  name: quay-auth-secret
The Buildah command has a parameter of:
--authfile=/etc/secret-volume/.dockerconfigjson
The above parameter will ensure that the Buildah command operates with the authentication token stored in the secret.
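For completeness, a secret of this shape can be created from a credentials file downloaded from Quay for a robot account, for example with a command along these lines (the file name is illustrative):

oc create secret generic quay-auth-secret \
    --from-file=.dockerconfigjson=myapp-robot-auth.json \
    --type=kubernetes.io/dockerconfigjson \
    -n myapp-ci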
Skopeo
An alternative to the Buildah process is to use Skopeo, which can copy images from one registry to another without performing the pull, retag, and push process. In a similar manner to the authorization file used by Buildah, Skopeo accepts authfile options for the source and destination registries within a single command. Further information on the use of skopeo copy can be found here: github.com/containers/skopeo/blob/main/docs/skopeo-copy.1.md
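A single skopeo copy invocation along the following lines would move the image directly between the two registries; the mount paths for the two authfiles and the Quay account name are illustrative:

skopeo copy \
    --src-authfile /etc/openshift-auth/.dockerconfigjson \
    --dest-authfile /etc/quay-auth/.dockerconfigjson \
    docker://image-registry.openshift-image-registry.svc:5000/myapp-ci/myapp-runtime:abcd-1234ef \
    docker://quay.io/<quay-account>/myapp-runtime:abcd-1234ef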
Summary
A GitOps process requires the creation of container images with unique tags. Such tags can be stored in a structured manner using Kustomize files that allow for a managed update process as part of the pipeline. Container images need to be carefully managed, and while accessing them from the OpenShift image stream is fine for rapid development activities, it is sensible to store images in an enterprise image registry for longevity and for deployments to production environments.