Tekton grew out of the Knative project and has since been backed by Red Hat as the future of pipelines in Red Hat OpenShift. Back in 2018, when I first heard about Tekton, my first reaction was “What problem are we trying to fix here?” After all, I know Jenkins, and I like Jenkins, so why would I want to face the learning curve of a new technology when what I have now already works well?

Whenever I asked the question “Why is Tekton better than Jenkins?” the most common answer was, “Tekton is cloud native,” usually followed by silence or a very quick pivot to something else. So I went away and looked for a clear definition of ‘cloud native,’ expecting to have the eureka moment.

In 2018 the Cloud Native Computing Foundation (CNCF) published this definition: “Cloud native technologies empower organizations to build and run scalable applications in modern dynamic environments such as the public, private and hybrid clouds.”

So, no obvious enlightenment is to be found there. I also admit to a definite, if irrational, attachment to the things that have been in my toolbag for years and work well. Before giving faithful old Jenkins his cards, I needed to be far more convinced that the grass really is greener on the other side of the fence. Tekton had to offer me something substantially beyond what I already had if I was going to move away from Jenkins.

Ultimately, my conclusion was that in the OpenShift/k8s space, Tekton integrates better and opens up opportunities that I don’t think Jenkins can necessarily offer.

This is discussed in the remainder of this article. If you are asking the same questions, then I hope you find some of the answers you are looking for.

My experience of Jenkins

Let’s be honest: that fine old institution, Jenkins, is old. It first appeared around 2005 and hasn’t really changed much in that time. Its greatest strength is the huge selection of plugins that allow you to interface easily with pretty much anything. But this is also a weakness, because those plugins have an indeterminate software lifecycle. If there is an issue with one of them, you typically have to just work around it.

Jenkins is Java-based and known to be memory and processor hungry. It runs constantly, and as the aggregated cost of compute resources mounts up, that starts to look like a problem.

In the pre-container days, you would typically see ‘big’ Jenkins, where the whole dev department shared a single Jenkins server, which consequently became the bottleneck. Since it would struggle under the load placed upon it, you would often be told that Jenkins had gotten itself into a massive tangle and needed to be restarted. Back to square one, then.

Then came containers, and now every team can own a Jenkins server and configure it just how they like. But this has given rise to another problem: ‘Jenkins sprawl.’ All those Jenkins servers chugging away, doing very little most of the time, and eating their clusters out of house and home. Not to mention the many and varied flavors of pipeline code each team propagates.

So it would be nice to have something that had a very small footprint, that could be decentralized to each team while strongly encouraging all pipelines to look similar. Hold those thoughts, because we’ll come back to this again later.

About process sequencing models

In a software system, we need to have a way to organize the sequence of service calls/process stages to effect an outcome, and there are two recognized ways of doing this.

The first pattern is the Orchestration type—it is typified by the Process Manager pattern.

A photograph of a woman in a formal suit conducting an orchestra

Source (This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.)

The Process Manager effectively behaves like a conductor in an orchestra. Hence the name: orchestration, as in orchestra.

Jenkins is an example of this pattern. It behaves as the process manager, the conductor. You code the process in the Jenkinsfile, and in doing so, you define the process. Whatever Jenkins hands off via its many plugin-based interfaces reports back to the process manager as completed, and the next stage of the process can commence. In most cases this appears to be synchronous behavior.

The Process Manager Pattern is described in Refactoring to Patterns (Kerievsky 2004) as being “inherently brittle.” This is because if someone decided to restart the process manager, then whatever it was running at that time would be broken. Hence, the design is known not to work particularly well with long-running processes. 

Now, let’s look at the second sequencing pattern. The following photograph is of a crowd doing the Mexican wave in a sports stadium.

A photograph of a large crowd at an event such as a concert or a football game.

Source. (This file is licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.)

Each member of the crowd knows when to stand up and raise their hands; no one is directing them. In a large football stadium, there is simply too much happening for orchestration to ever cope. This is an example of the second sequencing pattern: choreography.

Orchestration is the obvious design choice that most developers settle on first, but the not-so-obvious choreography is generally the much-improved redesign, and it works really well in conjunction with events.

This is very relevant in a world of microservices where hundreds of services could potentially need to be sequenced. But equally, in a world where long-running stand-up, test and tear-down type pipeline activities might be happening, the choreography style is a far more robust and practical model.

Tekton pipelines are built from separate containers that are sequenced via internal Kubernetes events on the Kubernetes API server. They are an example of the event-driven choreography sequencing type. There is no single process manager to lock up, get restarted, hog resources or otherwise fail. Tekton instantiates each pod only when it is needed to perform the stage of the pipeline it is responsible for. When it finishes, it shuts down, freeing up the resources it used for something else.
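To make that concrete, here is a minimal sketch of a Tekton Task. The names, parameter and image below are illustrative rather than taken from any real project; the point is simply that each step runs as a container in a pod that exists only for the duration of the run.

```yaml
# A minimal Tekton Task (names, parameter and image are illustrative).
# Each step runs as a container in a pod that exists only while the
# TaskRun is executing, then shuts down and frees its resources.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: run-unit-tests
spec:
  params:
    - name: repo-url
      type: string
  steps:
    - name: report
      image: registry.access.redhat.com/ubi9/ubi-minimal
      script: |
        #!/bin/sh
        # A real Task would clone $(params.repo-url) and run the test
        # suite here; this stand-in just shows the shape of a step.
        echo "Testing $(params.repo-url)"
```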

The ideal of loose coupling and cadence through reuse

Tekton also exhibits what is described in software architecture as loose coupling. Consider the following picture of a child’s train set:

A photograph of a wooden toy train.

"Toy train for wood tracks" by Ultra-lab is licensed under CC BY-SA 2.0

The connection between the train and its carriages is magnetic, so the child might use the train on its own, with every carriage attached, or, of course, any option in between.

The child can also change the order of carriages; for example, light green and yellow could be reversed if they so wish. If they had a second similar train set, then all the carriages from one set could be added to the other to make a ‘super train.’

This shows why loose coupling is the optimum architecture for software design. It inherently promotes reuse, and this is very much part of the ethos behind Tekton. Once built, the components can be shared between projects very easily.

While on the subject of coupling, we should explore the reverse of loose coupling, which is tight coupling. Let's now consider another child’s toy:

A photograph of a green and yellow wooden toy caterpillar with a red string attached for pulling it along.

"09506_Pull-Along Caterpillar" by PINTOY® is licensed under CC BY-SA 2.0

The caterpillar also has multiple sections, but they are fixed, meaning the order cannot be changed. You cannot have fewer segments, and more cannot be added to make a super caterpillar. If the child wanted only three segments on their caterpillar, they would have to convince their parents to buy a different caterpillar.

In this example, we can see that tight coupling does not promote reuse, but actually forces duplication.

Yes, I agree that Jenkins pipelines can be loosely coupled and code can be shared between projects. But this is not a given; it depends greatly on how the pipelines have been designed.

The alternative is to use one Jenkins pipeline for all projects. But this is restrictive, and all too quickly you will meet a project whose needs differ from the one-size-fits-all approach, and the integrity of the single pipeline design gets violated.

Tekton is declarative in nature, and its pipelines very much resemble the wooden train example. Superficially, you can visualize Tekton’s high-level ‘Pipeline’ object as the locomotive. The pipeline contains a set of tasks that you can think of as the carriages. Different pipelines can contain the same tasks, just re-parameterized and reused. So loosely coupled, parameterizable tasks get chained together to form pipelines. Tekton strongly promotes loose coupling, and the opportunity for reuse follows directly: the work from one project can be picked up, snapped in and used somewhere else.
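As a sketch of what that snapping together looks like (all of the task and parameter names here are hypothetical), a Pipeline simply references Tasks and feeds them parameters, and the same Task definition can be referenced from any number of Pipelines:

```yaml
# Hypothetical Pipeline chaining shared Tasks. The referenced Tasks
# (build-image, deploy-app) could equally be snapped into other
# Pipelines with different parameter values.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: repo-url
      type: string
  tasks:
    - name: build
      taskRef:
        name: build-image          # a Task defined once, reused across projects
      params:
        - name: repo-url
          value: $(params.repo-url)
    - name: deploy
      runAfter:
        - build
      taskRef:
        name: deploy-app
      params:
        - name: environment
          value: dev
```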

Tekton pipelines are also just additional Kubernetes object definitions, meaning they sit very easily with an everything-as-code approach and GitOps. The pipeline code can be applied to the cluster along with the rest of the cluster and namespace configuration.
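For example, under a GitOps approach the pipeline definitions might simply be listed in the same kustomization as the rest of a team’s namespace configuration (the file names below are illustrative) and applied by whatever already syncs that repository:

```yaml
# kustomization.yaml (illustrative): Tekton resources live alongside the
# other namespace configuration and are applied in exactly the same way,
# whether by `oc apply -k .` or by a GitOps controller such as Argo CD.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: team-a
resources:
  - serviceaccount.yaml
  - secrets.yaml
  - tasks/build-image.yaml
  - pipelines/build-and-deploy.yaml
```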

So why does all this matter?

It matters because the notion of formal dev, test and UAT type environments is largely an old-world concept. These fixed environments date back to having to purchase physical servers and then designate their use.

That is how it used to be done. In the world of OpenShift and everything-as-code, pets vs. cattle and so on, there is no reason why these environments cannot be instantiated dynamically by a pipeline and the tests then run. Once complete, the whole environment can be torn down again to make way for other tests.
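A hedged sketch of what such a pipeline might look like follows (the Task names are hypothetical). Tekton’s finally section runs whether the tests pass or fail, so the throwaway environment is always cleaned up.

```yaml
# Illustrative stand-up / test / tear-down pipeline. The finally section
# runs regardless of success or failure, so the temporary environment is
# always removed and its resources handed back.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: ephemeral-env-tests
spec:
  params:
    - name: env-name
      type: string
  tasks:
    - name: create-environment
      taskRef:
        name: provision-namespace      # hypothetical shared Task
      params:
        - name: name
          value: $(params.env-name)
    - name: run-tests
      runAfter:
        - create-environment
      taskRef:
        name: run-integration-tests    # hypothetical shared Task
  finally:
    - name: tear-down
      taskRef:
        name: delete-namespace         # hypothetical shared Task
      params:
        - name: name
          value: $(params.env-name)
```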

This is definitely not yet the case for most companies. Perhaps one reason is that the tooling available to date has not been a particularly good fit.

In conclusion

The world is still playing catch-up with all of the opportunities that technology like OpenShift offers us that simply were not possible in the past. 

With all those compute resources spending, at the very least, most of their evenings idling, the opportunities to automate testing and deliver better software are endless. I have long been kicking around these types of ideas, but never felt that Jenkins was the right tool for this type of work; on first meeting Tekton with an open mind, I quickly realized it felt like the perfect fit.

Tekton integrates from the ground up with the Kubernetes API and security model and strongly encourages loose coupling and reuse. It is event-driven, following the choreography model, so it is well suited to controlling long-running, testing-type processes. The pipeline artifacts are just additional Kubernetes resources (pod definitions, service accounts, secrets and so on) that lend themselves easily to the world of everything-as-code, aligned with the rest of the Kubernetes ecosystem.

Each application team can have its own pipeline code, which, when it is not running, flattens down to nothing, so you get the advantages of ‘distributed Jenkins’ without all the idling load. Apache Maven became so successful, so quickly, because it was a game changer: it imposed a regimented way of laying out a Java project, which meant devs could easily figure out the build configuration as they moved between projects. Tekton does the same with pipelines and makes the reuse of tasks easy and obvious.

CruiseControl was one of the first CI tools back in the early 2000s, and from personal experience, it was hard to use. At the time, Jenkins felt like a radical improvement on what came before. In the world of OpenShift/k8s, Tekton very much feels like the next step forward in pipeline technology.

Learn more