The beginning

It started in 2011 as an experiment.

What if we stuck some technology up on a public cloud to see if we could build a platform addressing diverse workloads - from "traditional" Java apps to more lightweight and stateless microservices?

We hoped it would be popular, but more than that we wanted an early peek into how workloads were changing for different needs, and into the benefits and challenges of running those diverse workloads on a public cloud. There were of course pioneers in this market - Heroku and EngineYard to name just two - but we wanted to cross the bridge from running stateless applications to running more traditional, stateful enterprise apps on the same platform. And, we wanted to do this by attracting as many developers as possible with a free offering and by taking advantage of all we had learned working on Linux technologies for two decades.

So, we took advantage of core Linux technologies like cgroups, namespaces and SELinux to containerize applications and pack them densely together. We offered every developer three free gears (our descriptor for containers then) with no time limits to create any app they wanted - with their choice of language - and run it on our platform. We focused on integration with Git and Jenkins and a great user experience for developers. We provided a marketplace. And, we continuously improved and upgraded the platform based on user feedback. Our engineering and ops teams were doing DevOps before that term came into vogue.
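
For readers curious what that "gear" containment looked like at the Linux level, here is a rough, illustrative sketch - not OpenShift's actual gear implementation - of the primitives involved: a cgroup to cap resources so workloads can be packed densely on one host, and namespaces (via util-linux's unshare) to isolate the workload. It assumes a cgroup v2 host, root privileges and a hypothetical cgroup name; SELinux labeling is omitted.

```python
#!/usr/bin/env python3
# Illustrative sketch only: cgroups for resource limits, namespaces for
# isolation. Assumes cgroup v2 mounted at /sys/fs/cgroup and root privileges.
import os
import subprocess

CGROUP = "/sys/fs/cgroup/demo-gear"  # hypothetical name, not an OpenShift path


def run_contained(command, mem_bytes=512 * 1024 * 1024):
    # Create a cgroup and cap its memory so many workloads can share one host.
    os.makedirs(CGROUP, exist_ok=True)
    with open(os.path.join(CGROUP, "memory.max"), "w") as f:
        f.write(str(mem_bytes))

    # Start the workload in fresh mount and UTS namespaces via unshare(1),
    # giving it its own filesystem view and hostname.
    proc = subprocess.Popen(["unshare", "--mount", "--uts"] + command)

    # Move the process into the cgroup; a real platform would do this before
    # exec (and would also apply an SELinux label), but that detail is elided.
    with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
        f.write(str(proc.pid))

    return proc.wait()


if __name__ == "__main__":
    run_contained(["sleep", "5"])
```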

That's how OpenShift Online came to life.

Then came the enterprises

As we traveled down the path of delivering a superior developer experience, some of the largest enterprises began to take notice. Developers within these enterprises wondered if they could move the same OpenShift technology in-house so their operations teams could run it with the privacy, governance and risk mitigation that private cloud provides. FICO was rethinking its business and wanted to build a cloud platform to reach SMBs who couldn't use its on-premise software. CA wanted to be more agile and nimble so it could respond to feature enhancements and technology advancements from emerging, cloud-based management technology providers. Cisco was interested in a single platform that had the flexibility to run Java and Ruby workloads, including mission-critical applications and dev/test for new services. They wanted an OpenShift of their own to run in their datacenter.

So, in late 2012, we packaged up the user experience and operations capabilities to deliver OpenShift Enterprise, which could be run anywhere Linux technology could be deployed - on bare metal, on hypervisors and on certified public and private IaaS clouds.

The swirl

The last four years have been a dizzying journey. More than 2.5 million applications have been deployed on our public cloud platform, OpenShift Online. To date, the number of applications has doubled every year, and we have added thousands of users every week. We feature these diverse applications in our Application Gallery - everything from document management to dynamic pricing/routing to an NFL Salary Cap simulator!

And it hasn’t gone unnoticed - we are extremely grateful to have been recognized with several awards, including InfoWorld’s 2014 Technology of the Year award. Along with success comes competition, which we think is healthy. It makes us work harder and deliver more value to our customers, who benefit as a result. It also serves to validate the market we are in - that we are working on solving problems that are meaningful.

The biggest compliment of course is the faith our customers have shown in us - you can see a sample reference list here. They span the globe and the Fortune 500. From renowned global educational institutions like UNC and UTS, to European technology solution providers like T-Systems, to the Swiss retailer LeShop, we continue to be inspired by how our customers are using our platform, and we collaborate closely with them to incorporate features they find valuable. Red Hat’s own IT team has been leading the charge, with almost 1,100 applications deployed on OpenShift Enterprise and over 1,000 Red Hat associates logging into the system for applications ranging from training and labs to a conference room reservation mobile app.

We are also gratified by the support of our partners, who build and deliver technology to enhance our customers’ experience. In just about three months, OpenShift Commons has spurred a group (now over 100 strong) that cuts across emerging technologies like big data and containers, as well as the integrators that build solutions to deliver value to our joint customers. It has become a fast-growing community for collaboration across developing, deploying, managing and operating cloud platforms.

If it ain’t broke, don't fix it?

It would be far easier to continue on the path we put ourselves on more than four years ago, with a highly adopted public cloud platform and an established enterprise customer base. But, we pride ourselves on being an open source company that sees change before our users, and that collaborates with communities to develop the best possible solution for our customers. Taking the well-traveled road was never an option.

When we transitioned from OpenShift v1 to v2, our engineering team evolved our cartridge specification to make it more flexible. Cartridges are how we extend our platform to integrate additional runtimes and applications. As we considered our next platform version, we asked ourselves - couldn't this be standardized, so a customer didn't feel like it was bespoke every time they used a cloud platform? Customers were translating between cartridges, buildpacks, AMIs and more, and that was fragmenting technology providers and keeping users from adopting the platform.

Given that we had been using container technology from day 1 (containers are, after all, a Linux concept), it was easy for us to spot the promise of Docker packaging and API standardization early on. We got behind it in the next generation of OpenShift and included support for it in Red Hat Enterprise Linux 7.

Once we adopted a standardized container packaging format, we asked ourselves if we were going to continue down the path of a bespoke broker and orchestration technology. Conversations with Google made us realize that we would deliver a more powerful solution to customers if we collaborated and shared our collective experience - Google with the knowledge gained from running containers at massive scale (two billion a week), and Red Hat with more than a decade of solving heterogeneous enterprise infrastructure needs in data centers. Kubernetes is the open source project that both Google and Red Hat, along with others in the industry, contribute to in order to address container orchestration and management.

The next generation of OpenShift

So was born OpenShift Enterprise 3. It incorporates the collaboration and energy around container packaging/APIs and orchestration, and we are top contributors to both the Docker and Kubernetes projects, doing our part to bring enterprise capabilities and features into the future direction of those communities. Our initiative to help drive the Open Container Project is an example of the leadership we bring to collaboration in the industry.

And, with OpenShift Enterprise 3, we are adding an application lifecycle and management capability set around it: support for continuous integration/delivery; developer workflow and access control; the ability to use various frameworks and tools; integration with networking APIs; stateful storage; and more. Integration with JBoss, Fuse and mobile technologies makes it a rich platform for application services, along with additional capabilities from certified ISVs. And, we’re giving operations teams the flexibility to run it on public and private clouds and to use different networking solutions for their needs. Not only are we contributing actively to these open source communities, but we are also adding enterprise requirements around integration, interoperability and flexibility.

We are incredibly excited to bring this new technology platform to the market. We now ask ourselves if “PaaS” as a descriptor for what we do has outlived its usefulness as the market has evolved over the last few years.

For a simplified analogy, consider the human body, which is controlled by its nervous system. Our nervous system is composed of neurons that interconnect to form a neural network that receives and responds to events and patterns. The brain directs activities that are carried out by collections of neurons.

Applications are increasingly less monolithic and are often better decomposed into microservices. Those microservices are then packaged in a standard way and deployed as containers. Much like neurons in our body, containers are distributed and deployed at scale across a diverse, hybrid cloud. Developers can focus on building the best possible services and leave the orchestration and management to the system. Similar to a nervous system, OpenShift Enterprise 3 directs collections of containers, which are grouped to form microservices and further aggregated to create applications. They can be run in a distributed fashion across a variety of environments. And the OpenShift broker orchestrates and manages their deployment as part of a complex system of interconnected containers.
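
To make that grouping a bit more concrete, here is a minimal, illustrative sketch in Kubernetes terms using the official kubernetes Python client: a Deployment keeps several identical containers running, and a Service groups them behind one stable endpoint as a single microservice. The names, image and namespace are hypothetical, and OpenShift layers its own resource types and developer workflow on top of these primitives.

```python
# Illustrative only: a Deployment (a set of identical containers kept running)
# plus a Service (one stable endpoint in front of them) forming one microservice.
# Assumes the official `kubernetes` Python client and a cluster reachable via
# your kubeconfig; names, image and namespace are made up.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

labels = {"app": "pricing"}  # hypothetical microservice name

container = client.V1Container(
    name="pricing",
    image="registry.example.com/pricing:1.0",
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="pricing", labels=labels),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the orchestrator keeps three copies alive, replacing failures
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="pricing"),
    spec=client.V1ServiceSpec(
        selector=labels,  # routes traffic to whichever copies are healthy
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```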

It’s a distributed application system.

Join us as the journey continues. It’s more exciting than it’s ever been.


About the author

Ashesh Badani is Senior Vice President and Chief Product Officer at Red Hat. In this role, he is responsible for the company’s overall product portfolio and business unit groups, including product strategy, business planning, product management, marketing, and operations across on-premise, public cloud, and edge. His product responsibilities include Red Hat® Enterprise Linux®, Red Hat OpenShift®, Red Hat Ansible Automation, developer tools, and middleware, as well as emerging cloud services and experiences.
