Entering 2015, the Linux container ecosystem faced a very real danger of fragmentation, as competing standards and formats threatened to undermine the application portability promise at the heart of containerized applications and their supporting infrastructure. This technological siloing could easily lead the IT industry back to the bad old days of expensive, custom-built technologies with limited, if any, interoperability between competing solutions. Concern over fragmentation and the lack of interoperability caused some customers to slow their technology exploration, shifting from adoption to “wait and see.” To continue pushing adoption forward and achieve wider enterprise recognition, Linux containers required standardization, preferably standards built on open source principles and backed by the broader IT industry. This, however, was easier said than done.

Complicating the issue is that Linux containers do not fall under a single standards category; the technology is simply too complex for a single overarching “container” standard. Instead, it’s best to look at container standardization from three distinct points of view, each corresponding to a layer of the container, or LDK, stack:

  • L(inux) - if the technology name didn’t give it away already, the Linux operating system forms the underpinnings of Linux container technology. While containers require only the bare essentials of an operating system, what they do need - isolation, security, resource allocation and process management - is provided by the Linux platform (a minimal sketch of these kernel primitives follows this list).

  • D(ocker) - the most hyped part of the container stack is the image format, of which Docker’s is the most common today. Others exist, however, hence the need for interoperability standards that span both existing and yet-to-be-created formats. These standards must allow different runtime tools and process managers to instantiate and manage containers through a standard set of APIs (a trimmed-down illustration of such a runtime configuration also follows this list).

  • K(ubernetes) - at the top of the stack is the orchestration layer, where disparate containerized application services are meshed together into complex composite applications. Much as Docker dominates the image format conversation, Kubernetes is the most commonly used orchestration engine, but others have emerged. Further, orchestration goes far beyond containers, touching other technology areas like OpenStack (with Heat templates) and Hadoop (with YARN) as well as competing technologies like Apache Mesos and Docker Swarm. Hence, standards that drive interoperability at the orchestration level are crucial.
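
To make the “L” layer concrete, the sketch below (in Go, a natural choice given the container ecosystem) re-executes itself into new UTS, PID and mount namespaces - the same kernel primitives, alongside cgroups, that container runtimes build isolation from. Treat it as a minimal, illustrative toy rather than any runtime’s actual implementation; it is Linux-only, needs root privileges, and the hostname value is an arbitrary assumption.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "child" {
		// Inside the new namespaces: a hostname change is invisible to
		// the host, and this process sees itself as PID 1.
		if err := syscall.Sethostname([]byte("container")); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("child sees pid %d\n", os.Getpid())
		return
	}

	// Re-exec ourselves into new UTS, PID and mount namespaces.
	cmd := exec.Command("/proc/self/exe", "child")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Run as root, the child reports itself as PID 1 and its hostname change never leaks to the host - exactly the isolation the Linux layer supplies.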

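For the “D” layer, the sketch below emits a JSON runtime configuration in the spirit of the emerging OCI runtime specification. The field names follow the published spec, but this is a heavily trimmed, assumption-laden subset (the real spec also covers mounts, namespaces, users, environment and more), not an authoritative schema; the point is that any compliant runtime could consume the same document.

```go
package main

import (
	"encoding/json"
	"os"
)

// Illustrative subset of an OCI-style runtime configuration; the real
// specification defines many more fields than shown here.
type Root struct {
	Path     string `json:"path"`
	Readonly bool   `json:"readonly"`
}

type Process struct {
	Args []string `json:"args"`
	Cwd  string   `json:"cwd"`
}

type Config struct {
	OCIVersion string  `json:"ociVersion"`
	Root       Root    `json:"root"`
	Process    Process `json:"process"`
	Hostname   string  `json:"hostname"`
}

func main() {
	cfg := Config{
		OCIVersion: "1.0.0", // illustrative; match the spec version you target
		Root:       Root{Path: "rootfs", Readonly: true},
		Process:    Process{Args: []string{"/bin/sh"}, Cwd: "/"},
		Hostname:   "container",
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(cfg); err != nil {
		os.Exit(1)
	}
}
```
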
Rather than allow fragmentation to quell the momentum of Linux containers, the IT industry at large rallied around two industry organizations, which both aim to set standards for their respective aspects of the Linux container stack:

  • The Open Container Initiative (OCI), which aims to codify and standardize container formats and runtimes; and

  • The Cloud Native Computing Foundation (CNCF), which formed around the full stack needed to support modern applications delivered as a composition of containerized (micro-) services. CNCF plans to leverage OCI for container image and runtime definitions, and includes Kubernetes for container orchestration.

It’s important to note that standards in the IT world often carry a reputation for being slow to evolve, overly complex, ungrounded in implementation experience and controlled by an elite few. We are working to change this by bringing together the best of open standards and open source development practices. Because containers and their supporting projects are open source, normally competitive organizations, individual enthusiasts and users/customers can collaborate directly at the technology level: code can be written quickly, reviewed with transparency and used as the basis for standards in a fraction of the time it took to amend earlier computing standards.

Helping to drive both of these initiatives as well as the overall notion of open source standards is Red Hat. We have led the open source world when it comes to delivering common technology architectures and cut our teeth building a standards-based, enterprise-grade Linux platform with Red Hat Enterprise Linux, which still serves as one of the leading models for Linux in the datacenter. So it shouldn’t be any surprise that we took an active role in helping to lay the foundation for both OCI and the CNCF.

Both of these organizations came together over a few short months in 2015 with the expected fanfare of bringing order to the perceived chaos of the Linux container Wild West. But where do things actually stand with both of these organizations and their respective standards? Are we moving forward fast enough? And what, exactly, is Red Hat’s take on standards for the least-talked-about (albeit arguably the most important) piece of the stack - Linux?

Over the coming weeks, we’ll delve into the organizations, technologies and emerging standards driving each piece of the LDK stack. So stay tuned - standardization is coming for containers...but is it coming fast enough?


About the author

Chris Wright is senior vice president and chief technology officer (CTO) at Red Hat. Wright leads the Office of the CTO, which is responsible for incubating emerging technologies and developing forward-looking perspectives on innovations such as artificial intelligence, cloud computing, distributed storage, software defined networking and network functions virtualization, containers, automation and continuous delivery, and distributed ledger.
