
An illustrated guide to GitOps

Understanding the basic principles driving GitOps offers Enterprise Architects a new way of working in the modern enterprise.
Any large enough monolith can break down into a microservice. (Photo by timJ on Unsplash)

GitOps is an automated approach to system and infrastructure administration that combines the capabilities of Continuous Integration/Continuous Deployment (CI/CD) with Git-based source code management to create, and then manage, enterprise systems quickly and accurately.

Whereas in the past GitOps was more the concern of developers, today the practice has matured to a point where it applies to the work of Enterprise Architects. The highly automated nature of digital infrastructure makes incorporating a GitOps approach into enterprise architecture attractive, but there's no magic bullet. You need to understand some concepts and practices before you can begin. Providing such an overview is the purpose of this article.

The place to start is with this essential principle: According to the GitOps way of thinking, everything is code. It doesn't matter if it's applications, networking, computing resources, or storage. If the asset is virtual, it's represented as code, and that code can be controlled through automation programming. The term often used to refer to this is "Infrastructure as Code" (IaC). It's at the root of the GitOps movement. The implications are worth examining.
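
Before digging into those implications, it helps to see how small the idea can be in practice. The following is a minimal, hypothetical sketch, not a real provisioning definition: the resource names and sizes are invented, and nothing in it talks to an actual cloud provider. It simply shows the desired shape of a toy environment declared as plain Python data, ready to be committed to a Git repository.

```python
# desired_state.py -- a hypothetical, minimal "infrastructure as code" definition.
# Nothing here provisions anything; it only illustrates the idea that the desired
# shape of the system is declared as code and versioned in Git.

from dataclasses import dataclass


@dataclass(frozen=True)
class VirtualMachine:
    name: str
    cpus: int
    memory_gb: int
    image: str


@dataclass(frozen=True)
class Network:
    name: str
    cidr: str


# The entire (toy) environment, expressed as data.
DESIRED_STATE = {
    "networks": [Network(name="app-net", cidr="10.0.0.0/24")],
    "machines": [
        VirtualMachine(name="web-1", cpus=2, memory_gb=4, image="fedora-39"),
        VirtualMachine(name="web-2", cpus=2, memory_gb=4, image="fedora-39"),
    ],
}
```

Changing the environment then means editing this file and merging the change; the automation described in the rest of this article does the rest.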

Infrastructure as Code changes everything

In the good old days, before the internet proliferated into every crevice of business activity, the world of software development was very physical and very segmented. Applications ran directly on dedicated computers connected by wires that ran to and through all sorts of physical networking devices. Behind every one of these physical computing and networking devices were human beings who attended to their wellbeing.

That world was very segmented: developers wrote the code, system admins took care of the servers, and network admins created the connections that tied the computers together. Deployment engineers ensured that the latest production version of the developers' code made it to its intended target.

Everybody had a particular job to do within the boundaries of their job description. Developers didn't mess with the system admins' machines, and system admins didn't mess with the developers' code. It was a very nice, tidy world in which things rolled along at a somewhat sluggish but acceptable pace.

And then virtualization came along.

In what seemed like nothing more than a blink of an eye, the physical computer was replaced by the virtual machine, and networking hardware gave way to software-defined networking. Everything became code. You no longer made a call down to the systems department to request a computer to do your work. Instead, you went to a service dashboard in your web browser and declared the configuration of the virtual machine you wanted. Then you clicked a button and waited a few minutes for your machine to spin up and wire up.

This worked because the computing asset had gone from being a physical piece of machinery to a virtual device emulated by software. Behind that software was code written by a programmer, and that code was stored in a source code repository. Thus, source code repositories went from being a place to keep application code to being the source of truth for all the virtual assets the enterprise needed for its infrastructure. This put the source code repository front and center in the modern world of DevOps.

Moving from DevOps to GitOps

The ability to automate machine provisioning in conjunction with deploying software via Continuous Integration/Continuous Delivery systems such as Jenkins and TeamCity played a critical role in the emergence of the DevOps movement. (See Figure 1 below.)

Figure 1: GitOps combines intelligent source control management under Git with Continuous Integration/Continuous Deployment.

Under DevOps, manual configuration of systems by humans gave way to automated deployment by machine intelligence. However, during the early years of DevOps, the purpose of online Git repositories was still to store and version source code. Automated deployment was in the realm of CI/CD systems. But that was about to change.

[ For more on GitOps, you might also enjoy: The present and future of CI/CD with GitOps on Red Hat OpenShift ]

Over time, online source control management services have grown to be quite powerful. They've taken over a lot of the functionality that was previously exclusive to a dedicated CI/CD system. In fact, repository services that use Git have evolved to build CI/CD capabilities directly into their platforms. (See Figure 2 below; a small sketch of what such a built-in pipeline might run follows the figure.)

Figure 2: Modern Git repository platforms have incorporated CI/CD capabilities into their feature set.
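
To make Figure 2 a bit more concrete, here is a hedged sketch of the kind of work such a built-in pipeline might run on every push for a hypothetical project: run the tests, build and publish a container image, and apply the declared state. The project layout, image name, and manifest directory are invented, and the script assumes the pytest, docker, and kubectl command-line tools are available on the build machine.

```python
# pipeline_step.py -- a hypothetical script a Git-hosted CI/CD pipeline might
# invoke on each push: test the code, build an image, apply the declared state.
import subprocess
import sys


def run(cmd: list[str]) -> None:
    """Run a command and stop the pipeline on the first failure."""
    print("+", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit(1)


def main() -> None:
    run(["pytest", "tests/"])                                     # 1. run the test suite
    run(["docker", "build", "-t", "example.io/app:latest", "."])  # 2. build the container image
    run(["docker", "push", "example.io/app:latest"])              # 3. publish it
    run(["kubectl", "apply", "-f", "deploy/"])                    # 4. apply the declared manifests


if __name__ == "__main__":
    main()
```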

The essence of GitOps

Combining intelligent source control with automated CI/CD tooling is basically what GitOps is about.

Under GitOps, you check some code into a Git repository, and all sorts of bells and whistles go off to get your code to a relevant target automatically. For example, if your code is a new feature for an existing application, it ends up in the application. If your code declares an update to a network policy, it's propagated into the networking infrastructure. (See Figure 3 below.)

Figure 3: GitOps controls the storage, versioning, and propagation of all assets "as code."

Under GitOps, the code defines everything your system requires to work—everything from building the application to creating and deploying the virtual machines or containers your system needs to run the application. GitOps takes care of it all.
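
The machinery that "takes care of it all" is typically a reconciliation loop: an agent repeatedly compares the state declared in Git with the state that is actually running and corrects any drift. The sketch below is only a toy illustration of that loop; the resource dictionaries and the create/update/delete helpers are hypothetical stand-ins for real provisioning calls.

```python
# reconcile.py -- a toy sketch of the reconciliation loop at the heart of GitOps.
# desired_state represents what the Git repository declares; actual_state
# represents what is currently running.


def reconcile(desired_state: dict, actual_state: dict) -> None:
    """Drive the running environment toward the state declared in Git."""
    for name, spec in desired_state.items():
        if name not in actual_state:
            create_resource(name, spec)      # declared but missing: create it
        elif actual_state[name] != spec:
            update_resource(name, spec)      # present but drifted: update it
    for name in actual_state:
        if name not in desired_state:
            delete_resource(name)            # running but no longer declared: remove it


# Hypothetical stand-ins for calls into a real provisioning API.
def create_resource(name: str, spec: dict) -> None:
    print(f"creating {name}: {spec}")


def update_resource(name: str, spec: dict) -> None:
    print(f"updating {name}: {spec}")


def delete_resource(name: str) -> None:
    print(f"deleting {name}")


if __name__ == "__main__":
    desired = {"web-1": {"cpus": 2}, "web-2": {"cpus": 2}}
    actual = {"web-1": {"cpus": 1}, "old-db": {"cpus": 4}}
    reconcile(desired, actual)  # updates web-1, creates web-2, deletes old-db
```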

Applying the principles of GitOps to enterprise architecture

GitOps has caught on with the cloud providers, and many startups have embraced it as well. Startups don't have any legacy systems to accommodate, so they're largely free to adopt GitOps in whatever way works best for them.

However, for many of the more established enterprises, adopting GitOps has been a slower undertaking. Big businesses come with a lot of infrastructure and varying degrees of automation. Some of the infrastructure might be virtualized; a good deal might not. Some companies with a lot of legacy equipment still have employees whose job is to update firewall rules on older network appliances. Thus, the road to GitOps adoption will be slower in these kinds of places than in a lean and mean startup.

For many larger companies, this means moving GitOps beyond the operational to the architectural. GitOps started as a set of operational tasks for controlling application deployment. Now the practice has matured to the point where it's relevant to many, if not all, aspects of enterprise architecture, everything from application configuration to the definition of computing resources. As such, accommodating GitOps is work that needs to be done both by the architects designing large-scale systems and by the developers who actually have to get it all to work.

For those architects new to GitOps, a good place to get up and running with the practice is to select a low-risk project that has, or can support, an infrastructure as code approach. Such a project gives an architect the opportunity to hone the skills necessary to work competently in a GitOps environment. It takes a bit of practice to get accustomed to this way of working, and getting the hang of things without running the risk of blowing up a mission-critical system has obvious advantages.

Remember, the key to GitOps is that the Git repository is the sole source of truth. Consequently, all manipulation of a project, its applications, and its environment emanates from changes made to the scripts and configuration files that represent the IaC approach. For some Enterprise Architects, adjusting to this way of thinking can take some time.
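
In practice, that means even a routine change travels through Git rather than through a console or an SSH session. The sketch below is a hypothetical illustration of the day-to-day workflow; the file path, the text substitution, and the commit message are all invented. The point is that no command here touches the running environment directly; the GitOps tooling reacts to the push.

```python
# change_request.py -- a hypothetical illustration of how a change reaches the
# system under GitOps: edit the declared state, commit it, push it.
import subprocess

CONFIG_FILE = "environments/production/web.yaml"  # hypothetical path in the repo


def bump_replicas(path: str, old: str, new: str) -> None:
    """Edit the declared state on disk (a plain text substitution, for the sketch)."""
    with open(path) as f:
        text = f.read()
    with open(path, "w") as f:
        f.write(text.replace(old, new))


bump_replicas(CONFIG_FILE, "replicas: 2", "replicas: 3")
subprocess.run(["git", "add", CONFIG_FILE], check=True)
subprocess.run(["git", "commit", "-m", "Scale web tier to 3 replicas"], check=True)
subprocess.run(["git", "push"], check=True)  # the GitOps tooling takes it from here
```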

When taking a GitOps approach, architects need to make sure all the artifacts required to describe, represent, and control the IaC environment are accounted for. For example, the deployment scripts that control the CI/CD process and the configuration files that define the virtual environment need the same level of detail and accuracy that the industry has come to expect from the database entity-relationship diagrams (ERDs) or application object model diagrams that are typically part of the plans for an enterprise architecture.

Under GitOps, the infrastructure is only as good as the code representing it. So architects will do well to pay close attention to that code.

Putting it all together

GitOps is the next step in the evolution of the automated enterprise. Today, as more companies make the internet the foundation of their business operations, the ability to meet the demand for more software at faster rates has become a competitive advantage.

The competitive landscape can be brutal. The companies that can keep up will survive. Adopting GitOps can help meet the competitive needs at hand.

However, to get the full benefit from the power that GitOps can bring to the modern digital environment, it needs to be embraced at the architectural as well as the operational level. This means that architects need to take the time to learn the details that go with making the source code repository the sole source of truth and, in turn, the focal point from which all actions upon the enterprise emanate. Learning the basics can take some time, particularly for architects who typically work far from an enterprise's operational activities. But the competitive advantages GitOps offers make the investment worth it.


Bob Reselman

Bob Reselman is a nationally known software developer, system architect, industry analyst, and technical writer/journalist.
