
It's been just over three years since Solomon Hykes presented the world with the (so far) most creative way to use the tar command: the Docker project. Not only does the project combine existing container technologies and make them easier to use, but its well-timed introduction drove an unprecedented rate of adoption for a new technology.

Did people run containers before the Docker project? Yes, but it was harder to do so. The broader community favored LXC, and Red Hat was working on a libvirt-based model for Red Hat Enterprise Linux. With OpenShift 2, Red Hat had already been running containers in production for several years, both in an online PaaS and on-premises for enterprise customers. The pre-Docker model, however, was fundamentally different from what we are seeing today: rather than enabling completely independent runtimes inside the containers, the approach in OpenShift 2 and libvirt-lxc was to partition the host, re-using the software installed on the host machine.

There were several issues with this model, the most prominent being complexity. Modern deployments are so complex that the process of recreating an application stack (from a Puppet manifest, for example) over and over again in dev/test/ops has become too fragile.

This mirrors the problem we faced with the predominant operational model roughly 20 years ago, when we moved from compiling software on local machines to pre-built binary distribution with RPM. The issue we solved in the “olden days” was that the behavior of a locally compiled application depended on the state of the machine at build time, and that this model carried too much overhead. We needed binary distribution to achieve a predictable experience of the aggregate software stack.

Today, stacks are so complex and changes in software streams so frequent that the stack you build is neither what you test nor what you end up running in production; on top of this comes the demand for updating applications and systems in place. This brings us back to a situation where the behavior of a production software stack simply depends on too many variables.

So how do containers, and specifically the packaging provided by the Docker project, marginalize, if not outright eliminate, these variables? By partitioning and aggregating, of course, which leads to a whole other set of challenges and solutions... but that’s for my next post.


About the author

Daniel Riek is responsible for driving the technology strategy and facilitating the adoption of Analytics, Machine Learning, and Artificial Intelligence across Red Hat. His focus areas include OpenShift/Kubernetes as a platform for AI, the application of AI to development and quality processes, AI-enhanced operations, and enablement for intelligent apps.

