At Red Hat, our involvement in open source technologies does not just revolve around code commits and community stewardship; one important focus is on the creation of standards. It may sound boring, but open standards applied to emerging software technologies can go far in not only fostering adoption but also helping to further drive innovation.
Open standards and the governance model of open source projects are closely related. The best projects create innovation and ubiquity by becoming the de facto standard for a given set of problems, absorbing and aggregating the many agendas and needs that drive their contributors. Our approach to open standards is driven by the "power of code": standards demonstrated by working software developed in the open, rather than abstract documents negotiated behind closed doors.
At a basic level, standards minimize the risk of a technology becoming fragmented and non-interoperable - fragmentation is effectively a "community killer," especially in the early days of an open source project. Without codification and standardization, code commits do not adhere to an overall goal and are often designed to fit only the use cases of a specific committer. The absence of standards also makes the wider enterprise world hesitant to adopt new technologies. Manageability and security are key concerns for these adopters - and standards enable the interoperability and consistency necessary to future-proof emerging technology investments.
This brings us to containers, arguably the hottest emerging open source technology in the past two years within the business world. While Red Hat fully embraces the innovation of Linux containers (and we helped drive several of the critical technology pieces upon which containers are based), we are also committed to the development and adoption of four key standards areas within the Linux container community.
The first is one that Red Hat has driven for many years: the isolation of Linux containers through control groups (cgroups), kernel namespaces, SELinux, and other capabilities that form the backbone not only of containers but of Linux distributions and PaaS platforms in general. This capacity for isolation is one of the key factors that make containers so valuable in the enterprise world, and the primitives live in the Linux kernel, all pushed forward by Red Hat, our partners and the community at large.
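Because these primitives live in the kernel rather than in any container runtime, they can be inspected on any modern Linux system. A minimal sketch (standard /proc interfaces; no container tooling required):

```shell
# Every Linux process already belongs to a set of kernel namespaces and
# control groups; a container runtime simply creates new ones and places
# processes inside them.

# Namespaces this shell process belongs to (mnt, pid, net, ipc, uts, ...):
ls /proc/self/ns

# Control-group membership of this process:
cat /proc/self/cgroup
```

Running the same commands inside a container shows different namespace identifiers and cgroup paths, which is precisely the isolation boundary described above.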
The second standard we have embraced and helped to make ubiquitous is Docker as a container format. Docker provides the ability to package an application with all its dependencies into a single image, and has been adopted by a large community of users. This creates a huge opportunity for the industry to collaborate on driving a single standard for container images instead of fragmenting into various competing formats. By rallying around a single format for development, distribution, delivery, and deployment, the industry makes the container lifecycle a much simpler prospect in the enterprise world -- for customers, application and infrastructure vendors, and service providers alike -- by removing the complexity and friction that several competing formats would bring. The most obvious example of the need for simplification is security. The security of containerized application deployments is largely defined by the ability to manage the life cycle of what's deployed inside containers, and format fragmentation would introduce unnecessary overhead, cost and risk, making it harder to prevent attacks and breaches.
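To make "packaging the application with all its dependencies" concrete, here is an illustrative Dockerfile; the application, file names, and port are hypothetical, and the base image is shown only as an example:

```dockerfile
# Hypothetical example: build a single, self-contained image.
# Everything the application needs is declared here, so the resulting
# image runs identically wherever the format is supported.
FROM registry.access.redhat.com/rhel7

# Install runtime dependencies into the image itself.
RUN yum install -y python && yum clean all

# Add the (hypothetical) application code.
COPY app.py /opt/app/app.py

# Document the service port and define how the container starts.
EXPOSE 8080
CMD ["python", "/opt/app/app.py"]
```

Building this file (`docker build -t myapp .`) yields one image that encapsulates code and dependencies together, which is what makes a single, shared image format so valuable across the lifecycle.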
The third standard that we are helping to drive targets the orchestration of multi-container applications. Kubernetes, an orchestration framework that was open sourced by Google in 2014, solves a number of problems: it allows users to describe an application as a combination of multiple containers working together, and to instantiate an application to meet scale and other scheduling requirements while ensuring that all of the container components and microservices work together and can "talk" to each other. This orchestration must address resource management across a cluster of hosts, and also consistently create network connectivity between these container-based application services. Ultimately, each component in a distributed application must interact with the others without broadcasting to everything else in the world, integrating with the isolation provided by the underlying Linux infrastructure. Because the orchestration primitives of clustering, connectivity and instantiation are handled consistently, Kubernetes' pluggable architecture can integrate with richer scheduling technologies without introducing application incompatibility.
Red Hat looks to Kubernetes as the orchestration standard for containers because:
We respect Google’s design choices, recognizing their experience with building and running large-scale, distributed container infrastructure for many years.
It’s open source with an open governance model - currently, Red Hat is the second largest contributor to the project after Google.
It is flexible and extensible. Kubernetes can provide a consistent orchestration standard across many different environments, from on-premise to public clouds, while at the same time being able to optimize for each of these environments. We believe that this makes it a good choice for enterprise IT looking to combine the modernization of legacy systems with the push for innovation in a consistent manner.
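The idea of "describing an application as a combination of multiple containers working together" can be sketched with a Kubernetes pod definition; the names, images, and ports below are hypothetical:

```yaml
# Illustrative pod: two cooperating containers scheduled together on the
# same host, sharing a network identity so they can "talk" to each other
# over localhost while remaining isolated from the rest of the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache
  labels:
    app: web
spec:
  containers:
  - name: frontend
    image: example/frontend:1.0   # hypothetical application image
    ports:
    - containerPort: 8080
  - name: cache
    image: example/cache:1.0      # hypothetical sidecar cache
    ports:
    - containerPort: 6379
```

The scheduler decides which cluster host runs this pod, and higher-level primitives (replication, services) handle scale and connectivity - the orchestration concerns described above.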
Lastly, standardization is also critical when it comes to the software distribution architecture: the registry, repository, or index for containers, which defines the protocol for exchanging, consuming, and publishing container content. This addresses both the mechanics of distributing bits - making available, consuming, verifying and reporting - as well as the problem of consistency and interoperability across a wide range of repositories and workflows and their attributes of trust, scale, performance and cost. A technology standard that unifies the mechanics while enabling choice through the ability to federate multiple registries is important not only for Red Hat's customers and partners, but also for the enterprise world in general. Ultimately, enterprises will remain skeptical of public container registries until they see some evidence of standardization and certification, not just of the containers themselves, but also of the registries that host this content.
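One concrete mechanism for this kind of federation, as a hedged sketch: because the distribution protocol is shared, a Docker daemon can be configured to pull through a local or on-premise registry mirror before falling back to a public one (the hostname below is hypothetical):

```json
{
  "registry-mirrors": ["https://mirror.example.com"]
}
```

Placed in the daemon configuration (e.g. `/etc/docker/daemon.json` on recent Docker versions), this lets an enterprise keep verified, certified content close to its infrastructure while still speaking the same protocol as any other registry - exactly the unify-the-mechanics-while-enabling-choice property described above.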
While we are actively working to drive the standards mentioned in this blog, it's important to note that we are also involved in many other related technologies - typically through transparent participation in open source communities. When making technology decisions, Red Hat evaluates available options with the goal of selecting the best technologies that are developed in upstream communities. This is why Red Hat is engaging upstream in appc to actively contribute to the specification. This involvement does not diminish our existing commitment to container standards, nor does it mean that we are embracing or supporting those projects in our products.
Rather, we’ve learned through nearly 20 years of open source leadership that the best path to great ideas is to participate in the innovation of upstream projects and to help drive interoperability and synergies across various projects. In some cases, this means bringing diverging communities together to create a best-of-both-worlds effect, aimed at establishing a standard as a means to ensure interoperability and make the solution worth more than just the sum of its pieces. Red Hat’s involvement in the Application Container Specification is an example: we aim to foster the emergence of standards for securely launching containers, independent of the specific technologies working together - without fragmenting the container format and distribution method used to deliver the application payload.
Putting it all together: the opportunity with containers is to create a better way to deliver applications, traditional and cloud-native, across different environments while enabling a multifaceted ecosystem of containerized applications, services and related tools. With our extensive history of delivering open source solutions specifically for the enterprise, we at Red Hat believe that the key technologies forming the core container infrastructure need to be open standards, driven by healthy projects and vibrant communities, to minimize the risk of fragmentation and the issues it creates for innovation and adoption. The core tenets of container infrastructure are the primitives for creating, distributing, running and managing containers, and we work towards avoiding fragmentation at this level, with the end goal of enabling innovation and customer value and delivering an answer to businesses concerned about adopting an "emerging" technology like containers.
About the author
Lars Herrmann is always found at the forefront of technology. From the early days of Linux to today’s digital transformation built on hybrid cloud, containers and microservices, Lars has consistently helped enterprises leverage open source technologies to drive business results. At Red Hat, Lars leads Red Hat Partner Connect, Red Hat's technology partner program.