Introduction
GitOps is a term that has become very popular in the last few years and is easily on its way to becoming just as overloaded with myth and mystery as DevOps. Some definitions of GitOps describe it as a mechanism for delivering and maintaining the infrastructure on which Kubernetes applications run. Other definitions retain that infrastructure focus and extend it to include the configuration of the business applications created for the platform. In this series of articles, we will present the principles and practices of GitOps, explaining the why and how of the automated processes that aim to deliver secure, high-quality, microservice-based applications quickly and efficiently.
The objectives and benefits of a GitOps approach are easy to see. By storing the definition of the required infrastructure and application configuration in a Git repository, teams hold everything required to deploy the application in structured, secure storage. If the application needs to be redeployed to a new location, for reasons of disaster recovery or resilience, then it is possible to do that with both ease and confidence. Additionally, the GitOps model helps organizations progress applications through various testing phases on the way to production. This process necessitates the deployment of an application to different clusters, or different namespaces within a cluster, and the GitOps model ensures that this can be done completely, reliably, and with confidence.
There is a clear distinction between the two domains of infrastructure and applications, with different teams of an organization responsible for each area. Infrastructure configuration takes care of the definition and delivery of the Kubernetes platform, in terms of the compute nodes, operators, role-based access control, governance policies, and many other attributes that create a platform on which applications can run. Applications can be segmented into the source code components that are compiled to an executable within a container image and the Kubernetes resources that configure the container images within a specific environment.
The guiding principles of GitOps are simple to articulate:
- A single source of the truth shall be maintained in the Git repository.
- Automation will be used to apply any changes committed to the Git repository.
- Any manual changes to an environment will be reverted automatically.
There are many more core tenets of a GitOps approach, underpinned by the cultural and technical principles of DevOps. So while there is probably no right or wrong answer to "What is GitOps?", the answer probably lies somewhere within the automation that delivers value and a working practice that allows change to be recorded, propagated, and rewound as required within an organization.
In simple terms, GitOps is the process of defining the single source of truth for an application's configuration in a Git repository. If teams wish to change the configuration of an application, they simply modify the files in the appropriate Git repository and let the automation process update the application. Extensions to this process can involve the use of external ticketing systems and pull requests in the Git repository to ensure that only approved and appropriate changes are propagated to production environments.
Source Code Management
There are many source code management systems available, but as the name suggests, Git is at the heart of a GitOps approach. Any source code management system that provides a Git-style interface can be used, and for the remainder of this article, any reference to Git will assume GitHub, since that is what is used for development and testing of the content presented here.
The interaction of the source code management system with the GitOps model is based on the following criteria:
1: The ability of the source code management system to trigger external processes when specific actions are performed within it. For example, a commit of new source code to a specific branch may cause an application build to begin. The build process will start by cloning the latest version of the source code from the relevant branch. Alternatively, a merge operation may trigger a build process to begin. These are examples of "push"-triggered actions, in which a change to the source code management system causes an external event. When such trigger events occur, a webhook payload of data about the event is sent to a specific URL. The push model is shown in figure 1, with a ‘git push’ action from the user. Note that other Git actions, such as pull-request creation, could also be used.
2: The ability of the automated deployment solution to monitor the source code management system to identify new commits. This will allow actions to be triggered in a “pull” mode in which an external entity takes a decision to perform an action after observing a change in the Git content. There is no payload content included within the pull model directly, and the external system must request any information that it needs.
Figure 2: External system watching the Git repository for a new commit
The pull-based and push-based trigger operations will be used for different aspects of the GitOps model.
The automation technology behind GitOps
To facilitate the pull and push models above, teams require automated software solutions. The open source solutions that are included in a Red Hat OpenShift subscription, and that are fully supported by Red Hat, are described below:
Red Hat OpenShift Pipelines (Tekton)
Red Hat delivers a supported and integrated implementation of the Tekton open source project as OpenShift Pipelines. This provides a complete continuous integration process capable of performing software builds, container image creation, container image management, testing operations, and security scanning with a variety of testing and validation solutions. OpenShift Pipelines operates by executing commands within container images that perform discrete steps of a pipeline process. Any command line utility that can be hosted within a container image can be used. Tekton Triggers can be used to respond to webhook requests from GitHub so that the Tekton pipeline can be executed as a result of an action in GitHub (or another source code management solution).
Pipeline assets are delivered as YAML files that are created as resources, within a specific namespace, on the OpenShift cluster. The upstream project name “Tekton” is often used to refer to OpenShift Pipelines.
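As an illustration only, a minimal build pipeline of this kind might look something like the sketch below. The pipeline, parameter, and workspace names are hypothetical, and the git-clone and buildah tasks are assumed to be available as cluster tasks in the OpenShift Pipelines installation.

```yaml
# Illustrative sketch only - names and parameters are hypothetical.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: app-build-pipeline            # hypothetical pipeline name
spec:
  params:
    - name: git-url
      type: string                    # location of the application source code
    - name: image
      type: string                    # target image reference for the build
  workspaces:
    - name: shared-workspace          # holds the cloned source between tasks
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone               # assumed to be provided as a cluster task
        kind: ClusterTask
      workspaces:
        - name: output
          workspace: shared-workspace
      params:
        - name: url
          value: $(params.git-url)
    - name: build-image
      runAfter:
        - fetch-source
      taskRef:
        name: buildah                 # assumed to be provided as a cluster task
        kind: ClusterTask
      workspaces:
        - name: source
          workspace: shared-workspace
      params:
        - name: IMAGE
          value: $(params.image)
```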
Red Hat OpenShift GitOps (ArgoCD)
Red Hat delivers a supported and integrated implementation of the ArgoCD open source project as OpenShift GitOps. This provides a complete continuous delivery process capable of monitoring GitHub repositories and ensuring that Kubernetes resources on the OpenShift platform are kept in synchronization with the content in the GitHub repository. ArgoCD applications are used to monitor a specific set of files within a GitHub repository and to create the necessary Kubernetes resources. ArgoCD can be used to deliver to the cluster the resources required for the application, and it can also be used to deliver OpenShift Pipelines resources such as tasks and pipelines.
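As an illustration only, an ArgoCD application resource might look something like the following sketch; the repository URL, path, and namespaces are hypothetical.

```yaml
# Illustrative sketch only - repository URL, path and namespaces are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-dev                       # hypothetical application name
  namespace: openshift-gitops         # default namespace used by OpenShift GitOps
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/app-delivery.git   # hypothetical repository
    targetRevision: main
    path: environment/01-dev          # directory holding the environment resources
  destination:
    server: https://kubernetes.default.svc
    namespace: app-dev                # target namespace for the created resources
  syncPolicy:
    automated:
      prune: true                     # remove cluster resources deleted from Git
      selfHeal: true                  # revert manual changes made on the cluster
```

The automated sync policy shown here is what enforces the GitOps principle that manual changes to an environment are reverted automatically.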
Further information on the use of both solutions will be provided in future articles.
Git Repository Content
Different categories of content need to be stored in different Git repositories. This facilitates a practical organizational structure, and it also allows content to be managed with different role-based access control mechanisms, so that permission to read and/or write content reflects each person's role within the organization. In this example, the following list shows the overall structure and content of the repositories to be used:
- Application source code - The content that will be compiled into an executable to be hosted within the production container image.
- GitOps Configuration - Resources that control the creation of the initial GitOps assets. This will include the definition of the locations of all other assets described below.
- Continuous Integration - The resources that create and manage the pipeline process for building the new container image from source code and a base container image.
- Continuous Delivery - The resources that describe how the application will be deployed to the various environments on the "route to live." This example uses different directories within the repository for each environment (an illustrative layout is sketched after this list); however, for an even greater degree of separation, an individual repository could be used for each environment.
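As an illustration only, a continuous delivery repository organized in this way might be laid out as follows. The repository name and the 02-qa directory are hypothetical; the 01-dev directory matches the example used later in this article.

```
continuous-delivery-repo/            # hypothetical repository name
├── base/                            # resources common to every environment
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── route.yaml
│   └── kustomization.yaml
└── environment/
    ├── 01-dev/                      # overlay for the development environment
    │   └── kustomization.yaml
    └── 02-qa/                       # overlay for the QA environment
        └── kustomization.yaml
```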
What creates what?
There are a number of moving parts in this scenario, so it is important to understand which technology is responsible for creating which content.
Container image
The application container image is produced by combining the base container image with the built artifact, such as a JAR file produced by building the application source code. The container build operation is performed by a Tekton pipeline process (an illustrative invocation is sketched after the list below).
- Inputs: Application source code in a GitHub repository
- Base container image containing the required application runtime
- Output: Container image in an OpenShift image stream
- Operator: Tekton pipeline process
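As a hedged illustration of how such a build might be started, the following PipelineRun sketch reuses the hypothetical pipeline and parameter names sketched earlier; the source repository URL and the app-ci namespace are also hypothetical, and the image value targets the internal OpenShift registry address that backs an image stream.

```yaml
# Illustrative sketch only - names follow the hypothetical pipeline shown earlier.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: app-build-           # each run receives a generated suffix
  namespace: app-ci                  # hypothetical continuous integration namespace
spec:
  pipelineRef:
    name: app-build-pipeline
  params:
    - name: git-url
      value: https://github.com/example-org/app-source.git    # hypothetical repository
    - name: image
      value: image-registry.openshift-image-registry.svc:5000/app-ci/my-app:latest
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:           # provides working storage for the clone and build steps
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
```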
Tekton pipeline
The Tekton pipeline is a set of Kubernetes resources created in a specific namespace. An ArgoCD application is responsible for the creation and ongoing synchronization of the Tekton resources from a GitHub repository.
- Input: Tekton YAML files in a GitHub repository
- Output: Tekton resources in an OpenShift namespace
- Operator: ArgoCD application
Deployment to development
The new container image needs to be deployed to a development environment (namespace) on the OpenShift cluster. This involves the use of deployment, service, route, and other YAML files, which are stored in a GitHub repository (a minimal sketch of such resources follows the list below). An ArgoCD application is responsible for the creation and ongoing synchronization of the application resources from a development directory of the GitHub repository.
- Inputs: Application YAML files in a GitHub repository
- Container image in an OpenShift image stream
- Output: Running instance of the application in a development namespace
- Operator: ArgoCD application
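As an illustration only, the application YAML files held in the development directory might include resources along the following lines. The names, namespace, and image reference are hypothetical, and the container port assumes the Open Liberty default.

```yaml
# Illustrative sketch only - names, namespace and image reference are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: app-dev                 # development namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: image-registry.openshift-image-registry.svc:5000/app-ci/my-app:latest
          ports:
            - containerPort: 9080    # default Open Liberty HTTP port
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: app-dev
spec:
  selector:
    app: my-app
  ports:
    - port: 9080
      targetPort: 9080
```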
Deployment to QA
In the same manner as for the development environment, the new container image needs to be deployed to a QA environment (namespace) on the OpenShift cluster. An ArgoCD application is responsible for the creation and ongoing synchronization of application resources from a QA directory of a GitHub repository.
- Inputs: Application YAML files in a GitHub repository
- Container image in an OpenShift image stream
- Output: Running instance of the application in a QA namespace
- Operator: ArgoCD application
ArgoCD application instances - configuration application
The preceding sections describe three ArgoCD application instances, and more are likely to be required in a customer implementation involving additional environments in which applications must be tested on the way to production. To ensure that all ArgoCD applications are created as required, a configuration application can be used. This is often referred to as an ‘app of apps’ because it contains references to the other ArgoCD applications (a sketch of such a configuration application follows the list below).
- Input: ArgoCD application definitions (including itself)
- Outputs: ArgoCD application for the configuration (controller)
- ArgoCD application for the CI process
- ArgoCD application for the Dev environment
- ArgoCD application for the QA environment
- Operator: ArgoCD application
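A minimal sketch of what such a configuration application might look like, assuming a hypothetical repository URL and directory and the default openshift-gitops namespace used by OpenShift GitOps:

```yaml
# Illustrative sketch only - the repository URL and path are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitops-configuration         # the manually created 'app of apps'
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops-configuration.git  # hypothetical repository
    targetRevision: main
    path: config                     # directory containing the other ArgoCD application definitions
  destination:
    server: https://kubernetes.default.svc
    namespace: openshift-gitops      # Application resources are created alongside ArgoCD itself
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Because this application also synchronizes its own definition, a change committed to the configuration repository is enough to add, modify, or remove any of the other ArgoCD applications.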
GitOps configuration process
Figure 3 shows the above repositories and their relationships. ArgoCD resources, shown in green, are responsible for reacting to changes in GitHub repositories and creating content. The assets they produce are either Tekton resources, shown in blue (step 2 in figure 3), or Kubernetes resources, shown in red (step 4 in figure 3).
Figure 3: Git repository configuration for GitOps model
The application source code repository has a webhook configured that will notify the continuous integration build process when new content has been committed. This is indicated by step 1 in figure 3. The continuous integration build process is provided by Tekton, and the resources that are used to create the tasks and pipelines are held within the continuous integration repository. A mechanism is required to deliver updated tasks and pipelines when they are modified and committed to the Git repository by the team. For this, an ArgoCD application is used, which monitors the continuous integration repository and ensures that all resources are synchronized to the Kubernetes resources that deliver the Tekton build process. This monitoring is shown as step 2 of figure 3, in which the ArgoCD application monitors the Git repository (red line) and updates the Kubernetes resources (gray line).
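As an illustration of the webhook side of step 1, the sketch below shows roughly how a Tekton Triggers EventListener could receive the GitHub push payload and hand it to a trigger binding and template that start the build pipeline. All of the names are hypothetical, and the referenced binding and template are assumed to exist in the same namespace.

```yaml
# Illustrative sketch only - listener, binding and template names are hypothetical.
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: app-source-listener
  namespace: app-ci
spec:
  serviceAccountName: pipeline            # service account created by OpenShift Pipelines
  triggers:
    - name: github-push
      interceptors:
        - ref:
            name: github                  # validates the webhook payload sent by GitHub
          params:
            - name: eventTypes
              value: ["push"]
      bindings:
        - ref: app-source-binding         # extracts the repository URL and revision from the payload
      template:
        ref: app-build-trigger-template   # creates a PipelineRun for the build pipeline
```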
The Git configuration repository is responsible for the creation and management of all ArgoCD applications that are used to deliver elements of the continuous integration and continuous delivery process. When creating the process, teams are required to manually create the initial ArgoCD application (the configuration application) shown in step 3 of figure 3. This application points at the GitOps configuration Git repository containing the definition of all ArgoCD applications. As each of the ArgoCD applications described above is created, it will begin synchronizing the resources to which it refers.
Step 4 of figure 3 shows the introduction of the continuous delivery process. Deploying the application to an environment requires a set of YAML files, which typically includes a deployment, service, route, config maps, secrets, persistent volume definitions, and others. These assets are stored in the continuous delivery repository within a directory structure that allows a base set of common resources to be defined (step 5), together with overlays of environment-specific files (a minimal kustomize-style sketch of such an overlay appears after the list below). To deploy each set of environment-specific Kubernetes resources, an ArgoCD application is created for each environment. For example, to deploy the application to the 01-dev environment, an ArgoCD application is created that refers to the /environment/01-dev directory within the continuous delivery Git repository. The ArgoCD application ensures that any changes made to the content of the /environment/01-dev directory are faithfully applied to the Kubernetes environment. An engineer working on the deployment of applications to each of the environments simply changes the YAML files, commits them to the Git repository, and allows ArgoCD to apply those changes to the Kubernetes environment. If a new environment is to be introduced to the route to live, then the team performs the following tasks:
- Define the required YAML files within the environment directory of the continuous delivery Git repository
- Create a new ArgoCD application definition within the environment directory of the GitOps configuration Git repository. This application refers to the new directory created within the continuous delivery repository
- Allow the configuration ArgoCD application indicated by step 3 of figure 3 to identify that the content has changed, which will create the new ArgoCD application defined in the previous step
- Allow the new ArgoCD application for the new environment to deploy the application Kubernetes resources
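One common way to express the base-plus-overlay arrangement described above is with kustomize, which ArgoCD can apply directly. The sketch below is illustrative only; the target namespace and the patch file name are hypothetical.

```yaml
# environment/01-dev/kustomization.yaml - illustrative sketch only.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: app-dev                    # place every resource in the development namespace
resources:
  - ../../base                        # the common deployment, service and route definitions
patchesStrategicMerge:
  - replica-count.yaml                # hypothetical patch holding settings specific to this environment
```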
Access to assets
The assets used in the development of this series of articles can be found in the GitHub repositories indicated in figure 4.
| Repository | Description |
| --- | --- |
| | Application source code for a simple Open Liberty based application |
| | Tekton continuous integration assets |
| | Deployment assets for each environment |
| | ArgoCD applications for the synchronization of the containerised application to each environment, the Tekton resources and the configuration ArgoCD application |

Figure 4: GitHub repositories used in this article.
Further information on how to use the above assets, and on additional resources such as required secrets, will be provided in subsequent articles.
Summary
A developer of the application source code can make changes to code and be assured that their commits will be picked up and built by the automated Tekton process. An environment engineer can change how the application behaves by updating the Kubernetes resource files and committing them to Git. A deployment automation engineer can create new environments and define ArgoCD applications to reference them by adding new files to the configuration repository.
All assets relevant to the creation, build, and deployment of the business application are held securely within the Git repository where they are subject to access control, audit, and logging.
In subsequent articles, further exploration of the processes will take place. Specific areas of focus will be:
- The controlled and automated release of container images using tags on images and managed updates to continuous delivery assets.
- The management of the continuous delivery Kubernetes resources and the use of a base set of assets with overlays for each environment.
- The use of branches for the management of approval of content for specific environments.
- The use of test automation and image scanning for the new container image.