This post originally ran on the OpenShift blog on Saturday, June 6, 2020.
Happy Birthday, Kubernetes!

Saturday marked the 6th anniversary of the commencement of the Kubernetes Project. While there are other milestones coming up this summer, such as the first time code was checked in for the project, we thought we’d use today as an opportunity to share some of Red Hat’s favorite memories from the first six years. Here’s to many more!

Ashesh Badani, senior vice president, Cloud Platforms, Red Hat

We made a huge bet on Kubernetes and its ecosystem well before it was ready; perhaps even before we were ready. While we’ve had OpenShift in the market since 2012, we knew that we were lacking the “flywheel.” When we had the chance to rearchitect for a standardized container runtime, format and orchestration, we went all in. Even as we were making OpenShift 3 generally available, it was incredible to partner with Amadeus, which launched its own service platform on OpenShift right as we released to market!

Over the last five years, it’s unbelievable to see the kinds of organizations that have come on the journey with us and the larger community. The world’s largest banks, airlines, hotels, logistics companies and even governments have completely embraced the path and entrusted mission-critical applications to the platform. Now, we are seeing analytics and AI/ML use cases proliferate. We couldn’t have imagined all of this five years ago. But my favorite memory was going back to our earliest customers, those who adopted OpenShift pre-Kubernetes, and explaining what we were doing with container orchestration. What we didn’t expect, but were overjoyed to see, was almost every one of them accepting this future direction and committing to migrate to a Kubernetes-based architecture. The rest is history: 2,000 customers and counting, with deployments in every cloud environment.

Clayton Coleman, architect, Containerized Application Infrastructure for OpenShift and Kubernetes, Red Hat

When Red Hat was still evaluating container orchestration systems, well before the space settled around Kubernetes as the de facto standard, Google suggested that I become one of the first external committers to the project. This was a fantastic time for me, for other Red Hatters involved in the project and for the community at large. The team at Google open sourcing the technology fully understood the problem domain, and Red Hat brought extensive experience in platform-as-a-service from our background in OpenShift and in open source generally, particularly in building simple systems that make it easier to get things done. Combined, we knew what enterprise organizations needed from Kubernetes and we had a plan to get there.
This was a great initial seed for a project: a bunch of people who understood the problems they wanted to solve and had a mandate from their companies to collaborate in the open to build something that could solve them. The mix of individuals involved was just great: for the first year and a half, it was people with a vision, a mandate and the experience of knowing what hadn’t worked before. Not everything was going to work right from the get-go, but we were able to build a very solid core. We have a responsibility as an open source community to keep our projects working, especially as adoption ticks up. It’s been a crazy journey, and I’m proud of what we’ve built. If we succeed, it’s because you can build on top of Kubernetes: it’s a foundation for future innovation.

David Eads, senior principal software engineer, Red Hat

When I think about the parts of Kubernetes that I’m most proud of developing, it’s the open-ended pieces that allow other developers to create things I haven’t thought of before: things like CRDs, RBAC, API aggregation and admission webhooks. These took a lot of investment and a significant amount of coordination across the community to produce. While they seem obvious now, at the time it was a “build it and they will come” plan, and come they definitely did.
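
To make one of those primitives concrete, here is a minimal RBAC sketch granting read-only access to pods; the `pod-reader` Role, `demo` namespace and `demo-sa` ServiceAccount are invented names for illustration, not anything from a real deployment:

```yaml
# Role: defines a bundle of permissions within one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader        # hypothetical example name
  namespace: demo
rules:
  - apiGroups: [""]       # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# RoleBinding: grants the Role's permissions to a subject.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
  - kind: ServiceAccount
    name: demo-sa         # hypothetical service account
    namespace: demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```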

Building on top of what these primitives provide, we’ve seen entire technology stacks develop: things like operators, self-hosted deployments, certificate management and new storage extension mechanisms. Looking at recent enhancement proposals, you can see the new activity around multi-cluster management and network extensions.

I’m looking forward to seeing how the community expands the extensions we have and the new features they will provide and enable.

Derek Carr, distinguished engineer, Red Hat

When I reflect back on the history of Kubernetes, I think about how lucky we were as a community to have a strong set of technologists with a diverse set of technical backgrounds empowered to work together and solve problems from fresh perspectives. Kubernetes was my first engagement with open source, and I remember reading pull requests early in the project with the same excitement as watching a new episode of my favorite television show. As the core project surpassed 90k pull requests, I am no longer able to keep up with every change, but I am extremely proud of the work we have done to build the community. Often I think back to the earliest days of the project and remember how its success was anything but guaranteed. As an engineering community, we were exploring the distributed system problem space, but as a project, our success was really tied to a set of shared values that each engineer lived out.

One of the earliest contributions I made to the project was the introduction of the Namespace resource. Looking back at the validation code in the associated pull request, it appears to have been the 4th API resource in the project, and the first API added through the open source community. It required the original set of maintainers to trust me, and that trust was earned through mentorship for which I am forever grateful. Implementing namespace support required building out concepts like admission control, storage APIs and client library patterns. These building blocks have evolved into custom resource definitions, webhooks and client-go with the help of a much larger community of engineers empowered by generations of leaders in the project. This has enabled a broad ecosystem to build upon Kubernetes to solve distributed system problems for a broader set of users than we ever imagined in the core project.

When I reflect on this early experience around the introduction of Namespace support, it highlights the Kubernetes project values in action: distribution is better than centralization, community over product or company, automation over process, inclusive is better than exclusive, and evolution is better than stagnation. As long as Kubernetes lives these values, I know we will celebrate many more birthdays in the future.

Joe Fernandes, vice president and general manager, Core Cloud Platforms, Red Hat

I’ve written a lot about Kubernetes over the past 6 years: from why Red Hat chose Kubernetes, to how Red Hat was building on Kubernetes, to eventually launching OpenShift 3 at Red Hat Summit 2015, which we had rebuilt from the ground up as a Kubernetes-native container platform. We also had to unexpectedly launch OpenShift 3 on Kubernetes 0.9 when the 1.0 release slipped beyond our launch date. But my favorite memories of Kubernetes came when presenting it to customers, both leading up to and since that launch.

Red Hat has a highly technical and informed customer base, who don’t just want to know what a product does in terms of features and benefits, but also all the low-level details of how it works. I’ve always appreciated that, but it does keep our Product Managers and Sales teams on their toes. We prepared presentations that described the key capabilities of Kubernetes, from pods to services, replication controllers, health checks, deployments, scheduling, ingress and more. Kubernetes can be difficult to understand at first, even for the most technical users. But eventually there is that moment when it starts to click and you realize the power of these basic primitives and the automation they can bring to your application deployments. I was lucky to witness that moment many times across numerous customer conversations over the past 6 years, and then had the continued good fortune to see many of those customers become OpenShift customers and realize those benefits for some of their most challenging, mission-critical applications. These were my favorite moments, and they are still what motivates our entire product team today.

Maciej Szulik, software engineer, Red Hat

Kubernetes was my first big introduction to open source contributions. I’d had a few patches here and there before, but all of them were minor fixes. I still remember the moment when I was talking with my friends and telling them that I was doing the exact same thing I had done in one of my previous projects, but this time it was open and widely available. And most importantly, I could gather feedback from many, many more developers and users than I ever had before. The contributions I’m talking about are CronJobs, which were then called ScheduledJobs, and the initial version of auditing, which probably nobody remembers now given the fancy advanced audit capabilities we have.
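
For readers who haven’t used the feature Maciej is describing, here is a minimal CronJob sketch; the `hello` name, schedule and busybox command are invented for illustration (at the time this post ran, the resource lived in `batch/v1beta1`; it has since graduated to `batch/v1`):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello              # hypothetical example name
spec:
  schedule: "*/5 * * * *"  # standard cron syntax: every five minutes
  jobTemplate:             # the Job the controller creates each time the schedule fires
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: hello
              image: busybox
              command: ["sh", "-c", "date; echo Hello from the CronJob"]
```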

The project allowed me to grow both professionally and personally. I learned a ton about distributed systems, programming in Go and beyond. I’ve made many friends around the globe whom I’ve had the privilege to see at almost every KubeCon for the past several years. I’m really grateful for the opportunity to be part of this amazing journey, and I can’t wait to see where the next years will lead us!

Paul Morie, senior principal software engineer, Red Hat

I have a lot of very fond memories of adding some of the “core” API resources and features, like Secrets, ConfigMaps and the Downward API, that still give me a little buzz of nostalgia every time I see people using them.
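
As a small illustration of the Downward API Paul mentions, here is a minimal sketch of a Pod that exposes its own metadata to its container through environment variables; the pod name, image and variable names are invented for this example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo      # hypothetical example name
spec:
  containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "echo running as $POD_NAME in $POD_NAMESPACE; sleep 3600"]
      env:
        # Each variable is filled in from the pod's own metadata via the Downward API.
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
```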

As a developer, I also have favorite changes and refactors, and moments associated with them that I treasure. One that comes to mind was seeing the “keep the space shuttle flying” comment I wrote get a lot of social media traction, years after it was written. It was written as part of an overhaul of the persistent volume system, basically as a note to future developers to be careful about attempting to simplify the logic (since doing so had confused us in the community during the overhaul). A couple of years later someone came across my note and found it amusing enough to share, and there were some fun discussions on social media about it. Someone also made a very cute picture of a space shuttle that puts a smile on my face whenever I see it.

Another very satisfying thing for me personally is a refactor to the kubelet that took several releases to complete but seems to have been durable over time. I think I was chasing a bug with PodIP in the Downward API when I realized that I had taken an extremely convoluted path through the five-thousand-plus lines of the kubelet.go file, and I became possessed by a desire to bring some order to the chaos there. Gradually I was able to refactor this enormous file into smaller, intentionally ordered files that made things (I hope) easier to understand and maintain. No crazy hacking or anything, just moving code between files, but it sticks out in my memory.

Beyond the things that we did in the Kubernetes community that were great, I would also like to call attention to things that Kubernetes didn't do that I feel are a part of its success. To me, the fact that Kubernetes does not mandate an official build engine for container images, configuration language, middleware, etc., is a huge win and part of why it has been adopted so broadly. These outcomes were in no way a given and we had fantastic community leadership that made smart (sometimes tough) choices about managing the project scope.

Instead of trying to solve every problem under the sun (or at least those popular at any given time), the Kubernetes community made smart investments to make open-ended extensions of Kubernetes possible and, slightly later, easier. In 2016, for example, we had very limited choices when developing APIs for the service-catalog SIG outside the kubernetes/kubernetes project and essentially had to write our own API server. Now, I can write a custom resource definition (CRD) and in a few minutes have a functional API, with a much, much lower level of effort. That’s pretty incredible!
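
To illustrate what Paul is describing, here is a minimal CRD sketch; the `widgets.example.com` group and `Widget` kind are invented names for illustration, not a real project API:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Name must be <plural>.<group>; everything here is a hypothetical example.
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:      # structural schema validating Widget objects
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
```

Applying a manifest like this immediately gives the cluster a working `/apis/example.com/v1/namespaces/*/widgets` endpoint, which is the low level of effort being described here.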

The investments the Kubernetes community made in these extension mechanisms are a key part of the project’s success. They have enabled not only integrations with a number of different stacks, allowing a large adoption footprint, but have also facilitated the creation of entirely new ecosystems. The Knative project and the Operator ecosystem, for example, simply would not exist in the same form, or with the same addressable user base, without the well-developed extension mechanisms we have today.

Rob Szumski, product manager, OpenShift, Red Hat

A few moments stick out to me in the Kubernetes journey for the major shifts that they allowed the project to take. First were the huge scalability improvements driven by the collaboration between the etcd and Kubernetes communities around etcd 3’s switch to gRPC for communication with the API server. This change drove down scheduling time dramatically on 1,000-node clusters and expanded scale testing to succeed against 5,000-node clusters, where it had previously failed. Great results for development that largely took place outside the Kubernetes code base.

The next large shift in Kubernetes capability was the idea that Kubernetes could be “self-hosted,” meaning the Kubernetes control plane and assorted components run on Kubernetes itself. This unlocked the ability to have the platform manage itself, which was pioneered in CoreOS Tectonic and brought to OpenShift. Ease of management is key as we see Kubernetes deployed far and wide across the cloud; keeping hundreds or thousands of clusters in an organization up to date can only be done through automation within the platform itself.

The last major event was the introduction of the Operator concept in conjunction with the CRD extension mechanism. This unlocked a huge period of workload growth for distributed systems and complex stateful applications on Kubernetes. Extending your cluster with the experience of a cloud service, running anywhere Kubernetes can run, is essential to the hybrid cloud.