In our first post defending the pet container, we looked at the challenge of complexity facing modern software stacks and one way that containers address this challenge through aggregation. In essence, the Docker “wrapper” consolidates the next level of the stack, much like RPM did at the component level, but aggregation is just the beginning of what the project provides.
If we take a step back and look at the Docker project in context, there are four aspects that contribute to its exceptional popularity:
- it simplifies the way users interact with the kernel, for features we have come to call Linux containers;
- it provides a tool and format for aggregate packaging of software stacks to be deployed into containers;
- it offers a layering model in which generations of changes build on top of each other through single inheritance;
- it adds a transport for these aggregate packages.
These aspects of containers have triggered a full paradigm shift in how we look at the operating system (OS), one driven by Red Hat and many other IT leaders contributing to the Docker project. The usual squabbles over positioning and control that come with such a shift notwithstanding, the future of the OS is defined by containerization.
The Docker project’s model has advantages over previous approaches to Linux containers, as we outlined previously, for several reasons. The combination of aggregate packaging, runtime isolation of containers, and a layering model is extremely powerful. It enables us to solve a number of the core issues of the existing component-level approach, particularly those that have been hurting IT departments seeking to implement DevOps around more traditional application stacks. It helps break the eternal spiral of change that comes with updating individual components in production.
From Red Hat’s perspective, we are building an entire strategy around this layered inheritance model, with products like OpenShift, Red Hat CloudForms, and the Container Development Kit (CDK) for the delivery of a full application lifecycle workflow encompassing everything from development and build scenarios to testing and production.
With all of this in mind, a best practice around Linux containers is clearly to *always* build your container using a build service, and to *never* run yum inside the instantiated production container, right?
That is, unless it isn't.
Even with the advantages that aggregate packaging offers, we should not forget the original use case: building containers in a build service and using them as aggregate binary packages to enable repeatable builds and predictable behavior, just as RPM does at the individual component level.
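To make that pattern concrete, here is a minimal sketch of the build-service approach; the base image, package, and registry names are assumptions for illustration, not a Red Hat reference workflow.

```
# Sketch of the build-time pattern: packages are resolved when the image is built,
# not inside a running production container. Image, package, and registry names are assumed.
cat > Dockerfile <<'EOF'
# assumed RHEL 7 base image
FROM rhel7
# aggregate the stack at build time; yum never runs in the production container
RUN yum -y install httpd && yum clean all
COPY site/ /var/www/html/
CMD ["httpd", "-DFOREGROUND"]
EOF

docker build -t registry.example.com/myteam/myapp:1.0 .   # repeatable, versioned build
docker push registry.example.com/myteam/myapp:1.0         # transport: ship the aggregate package to a registry
```

Every change then flows through a new image build rather than through modifications to running containers.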
But the dirty secret of containers is that this is not always the right way to approach your delivery process.
What if I am looking to simply have a traditional Linux host with multiple runtimes in parallel, without fundamentally changing my operational model? Containers in the Docker format do *not* imply automatically moving to immutable infrastructure and building everything into a container image through a build service.
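As a rough sketch of that more traditional operational model, the commands below run a single long-lived, named container and administer it interactively; the image and package names are assumptions for illustration.

```
# A long-lived "pet" container, managed like a traditional host (image and package names are assumed).
docker run -it --name legacy-app rhel7 /bin/bash   # start a named container and log into it
# ...inside the container, administer it the way you would a VM:
#   yum -y install mariadb-server
#   exit
docker start legacy-app                            # the same container, changes intact, comes back
docker exec -it legacy-app /bin/bash               # log back in later to keep administering it
```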
So how does this impact the use cases for Linux containers? And how, exactly, does everything that we’ve talked about lead to a use case for pet containers? Stay tuned for the next post, where we’ll dig into where (and how) pet containers should play into the enterprise IT landscape.
About the author
Daniel Riek is responsible for driving the technology strategy and facilitating the adoption of Analytics, Machine Learning, and Artificial Intelligence across Red Hat. His focus areas are OpenShift/Kubernetes as a platform for AI, the application of AI to development and quality processes, AI-enhanced operations, and enablement for intelligent apps.