Nearly 10 years ago this week, at the very first AWS re:Invent, Red Hat took a leap of faith with our technology portfolio. We built on the success of our flagship Red Hat Enterprise Linux and JBoss Middleware solutions and launched Red Hat OpenShift Enterprise 1.0, Red Hat’s fully open source, hybrid Platform-as-a-Service (PaaS) offering geared for enterprise developers.

Now, here we are a decade later. Open source is a key driver for all technology innovation, whether in the datacenter or in the public cloud. Kubernetes is the de facto standard for containers and cloud-native platforms, and edge computing is making its presence felt in enterprise IT. Red Hat OpenShift has changed…just a little.

The evolution of OpenShift has been a journey, but one that’s far from over. We continue to extend to new platforms and use cases, add new capabilities, enhance existing features and break new ground to address emerging IT needs. All that said, before we can say where we’re going next, we need to understand where we’ve already been.

OpenShift’s origin

While Red Hat OpenShift 1.0 reached general availability in November 2012, the concept started much earlier. Initial development began in early 2010, was accelerated by Red Hat’s acquisition of Makara later that year and was featured in Paul Cormier’s keynote at Red Hat Summit 2011.

Developers were flocking to new public cloud PaaS services like Heroku that enabled self-service application deployments and increased agility. CIOs, wanting that same speed and agility for developers in their own organizations, saw value in PaaS as a pathway to standardizing enterprise application developer tools and environments and creating a common application foundation, regardless of the resulting app’s purpose or code base. They loved the concept of PaaS, but were limited in their ability to move their enterprise applications to the public cloud.

OpenShift Enterprise 1.0, as with Red Hat’s entire hybrid cloud portfolio today, enabled customers to run applications in any environment across the hybrid cloud, including their data centers or any public cloud. Built on a foundation of Red Hat Enterprise Linux (RHEL), OpenShift could be deployed anywhere RHEL was certified to run and used RHEL-based Linux containers to deploy applications, long before Docker was a thing.

On top of this platform, developers had their choice of application languages and frameworks to deploy their applications, from enterprise Java with JBoss Enterprise Application Platform 6 or Tomcat, to Ruby, Python, Node.js and more. OpenShift was unique at the time: a fully open source PaaS that delivered a slew of polyglot developer tools and capabilities, built on the stability and enhanced security of RHEL.

As they say, however, nothing stays the same.

Gears, Cartridges and…Kubernetes

The next minor releases of OpenShift, as well as the major release of OpenShift Enterprise 2, added a range of features developed in the OpenShift Origin upstream community, from improved developer capabilities and greater administrative controls to tighter integration with OpenStack and other infrastructure platforms. OpenShift ran your apps in “gears,” which were essentially OpenShift’s predecessor to Linux containers as we know and love them today.

Gears used underlying technologies in RHEL such as Linux kernel namespaces, cgroups and SELinux to deliver a highly scalable, containerized application platform with an enhanced security posture. OpenShift also added “cartridges” to bring new application runtimes to the platform: essentially third-party applications packaged by Red Hat partners to run in containers and validated to work on OpenShift.
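
For a sense of what those kernel building blocks look like in practice, here is a minimal Go sketch (Linux only, and purely illustrative rather than how gears were actually implemented) that starts a process in its own namespaces and attaches it to a cgroup memory limit. The cgroup v2 path and limit value are assumptions, and SELinux labeling is omitted entirely.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Start a shell in new UTS, PID and mount namespaces — the same kernel
	// primitives that gears (and today's container runtimes) rely on for isolation.
	cmd := exec.Command("/bin/sh")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, "start:", err)
		os.Exit(1)
	}

	// Constrain the child with a cgroup v2 memory limit. The path and limit are
	// illustrative assumptions; this requires root and a cgroup v2 hierarchy
	// mounted at /sys/fs/cgroup.
	cg := "/sys/fs/cgroup/gear-demo"
	if err := os.MkdirAll(cg, 0o755); err == nil {
		_ = os.WriteFile(cg+"/memory.max", []byte("256M"), 0o644)
		_ = os.WriteFile(cg+"/cgroup.procs", []byte(fmt.Sprint(cmd.Process.Pid)), 0o644)
	}

	_ = cmd.Wait()
}
```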

While a far cry from the capabilities of OpenShift today, this demonstrated the efficiency and agility that could be gained by using containers for application deployments, instead of having to provision virtual machines (VMs) for building and deploying applications and then manage the lifecycle of those VMs across a team of developers. It also showed how important our partner ecosystem was (and continues to be) to the success of the platform.

Everything changed with the rise of Docker containers and, a short time later, Kubernetes. The launch of the Docker open source project in 2013 would turn out to be a watershed moment in the technology industry. While Linux container technology had roots dating back to Unix and was already in use in solutions like OpenShift, Cloud Foundry and even Heroku, Docker made containers simpler and more accessible to developers and made it much easier to package new applications to run in containers. 

Red Hat was one of the first companies to join the Docker community and contribute to the project. RHEL then became the first commercial Linux OS to announce support for Docker containers. Red Hat later worked with the community to drive the Open Container Initiative (OCI) standard for container runtime and packaging format, which today is the industry standard for all containerized applications. 

Containers, and gears before them, weren’t much more than ways to package and run individual application services. However, most applications required multiple microservices to be orchestrated and linked together to form a larger, more sophisticated app. This is where Kubernetes entered the picture as the standard for orchestrating and managing containers at scale. Red Hat quickly recognized the importance of the project and joined Google to help launch the Kubernetes community, and with OpenShift 3, Kubernetes became the foundational engine for the OpenShift platform.
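
To make “orchestrating containers at scale” concrete, here is a hedged client-go sketch that declares a Deployment of three replicas and leaves it to Kubernetes to keep that many copies running. The namespace, object names, image and kubeconfig path are illustrative assumptions, not anything specific to OpenShift.

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Build a client from the local kubeconfig (default path assumed).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"app": "hello"}
	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "hello"},
		Spec: appsv1.DeploymentSpec{
			// Desired state: three copies of the service, tied together by a label selector.
			Replicas: int32Ptr(3),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "hello",
						Image: "quay.io/example/hello:latest", // placeholder image
					}},
				},
			},
		},
	}

	// Kubernetes reconciles reality toward this declared state, scheduling,
	// restarting and replacing containers as needed.
	if _, err := clientset.AppsV1().Deployments("default").
		Create(context.TODO(), deploy, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```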

Operate immutable platforms with Operators

The Kubernetes backbone of OpenShift was now set, but Kubernetes is, well, complex. In the early days of OpenShift 3, we worked hard to abstract away orchestration complexity, but application and platform maintenance could still be a problem at scale. In early 2018, Red Hat acquired CoreOS, and with the company came its Tectonic platform, which integrated the CoreOS Container Linux operating system and the concept of Kubernetes operators.

Kubernetes operators, built on Custom Resource Definitions (CRDs), provided a repeatable pattern or framework for maintaining and updating cloud-native applications, essentially treating every application running on the platform as a Kubernetes-native service. Meanwhile, the integration of RHEL with CoreOS Container Linux spawned RHEL CoreOS and made the operating system an integrated component of the OpenShift Kubernetes platform. RHEL CoreOS and the operator framework made it easier to deploy the OpenShift platform and deliver more complex services across the open hybrid cloud, and at Red Hat Summit 2019, Red Hat OpenShift 4 was released.
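
The shape of that operator pattern is easier to see in code. Below is a minimal, hedged controller-runtime sketch of a reconcile loop: a controller watches a resource and repeatedly drives the cluster toward the state that resource declares. A real operator would watch a custom resource defined by a CRD and manage the objects it describes; a built-in ConfigMap stands in here only to keep the sketch self-contained.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

type Reconciler struct {
	client.Client
}

// Reconcile is called whenever the watched object changes; its job is to make
// the actual state of the cluster match the desired state the object declares.
func (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var cm corev1.ConfigMap
	if err := r.Get(ctx, req.NamespacedName, &cm); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// A real operator would create or update the Deployments, Services and other
	// objects its custom resource describes, then report status back on it.
	log.FromContext(ctx).Info("reconciling", "name", req.NamespacedName)
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}). // a CRD-backed custom type would go here
		Complete(&Reconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```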

Today, OpenShift 4 is the industry’s leading enterprise Kubernetes platform, highlighted by the comprehensive offering of Red Hat OpenShift Platform Plus, which also includes Red Hat Advanced Cluster Security for Kubernetes and Red Hat Advanced Cluster Management for Kubernetes. With a heavy emphasis on abstracting away complexity and creating repeatable patterns for service maintenance, OpenShift could now focus on furthering how developers build, deploy and update their next generation of applications.

Cloud services…and beyond

Even before the OpenShift 1.0 launch at AWS re:Invent 2012, Red Hat deployed OpenShift as a beta public cloud service to get direct feedback from developers and learn what it took to run a managed PaaS cloud service at scale. As a foundation for the open hybrid cloud, OpenShift needed to run wherever and however customer needs dictated, whether fully managed or self-managed. Shortly after the OpenShift Enterprise 1.0 product was launched, Red Hat began launching fully managed OpenShift cloud services, first with OpenShift Online, then with OpenShift Dedicated. This subsequently led to expanded collaborations with both AWS and Microsoft Azure on jointly managed OpenShift cloud services.

Today, Red Hat OpenShift Service on AWS (ROSA), Azure Red Hat OpenShift and Red Hat OpenShift on IBM Cloud highlight how the current iteration of OpenShift lends itself to a managed service model. IT organizations can gain the benefits of a powerful and innovative hybrid cloud platform, one that carries the same skills and tools across their own datacenter and multiple public clouds, with limited overhead and maintenance costs. The future of enterprise IT won’t be solely in the public cloud or in the datacenter; it will be a mix of these environments, and OpenShift is engineered to meet the demands of both extremes.

But we haven’t stopped there. As edge computing grows in importance to enterprises in nearly every industry, we’ve designed deployment models for OpenShift to meet these needs. These include small-cluster and single-node server deployments with single node OpenShift, zero-touch provisioning for OpenShift-configured hardware, and the newly introduced Red Hat Device Edge (built on the MicroShift project), which brings Kubernetes from servers to edge devices. OpenShift is reinventing itself again to address the next wave of computing.

These are just the foundational elements of the platform. The capabilities for developers on OpenShift have also evolved to meet a variety of demands; look at OpenShift Serverless, Service Mesh, Pipelines or GitOps as examples of how we’re meeting the new generation of cloud-native application requirements.

So what’s next? If you look at the history of OpenShift, where we’ve been, how we started and how we’ve changed, I think we can answer that question this way: whatever the next trend in computing may be, and however enterprise demand for hybrid cloud changes, Red Hat OpenShift will be at its core.

Interested in learning more about where Red Hat OpenShift exists in today’s cloud-native landscape? Read more here.