In a recent blog post on the appc spec, I mentioned Project Atomic’s evolving Nulecule [pronounced: noo-le-kyul] spec as an attempt to move beyond the current limitations of the container model. Let's dig a bit deeper into that.
Containers are great. Docker brought portability, via aggregate application packaging, to the container space. Since then, we have been on a path to fundamentally change how complex software has been developed and distributed over the last 20 to 40 years. This paradigm shift is just beginning, but its impact already cannot be ignored.
The reason for this success is a problem that has been obvious to many of us for some time: the modern, open source-based application stack has become far too complex to project onto the traditional monolithic, single-instance/single-version user space model of legacy UNIX. The problem has been made worse by the way binary code distribution is implemented in popular package managers like rpm and dpkg. Given the broad set of options developers can choose from when building applications, combined with the high rate of change in modern software, the idea that a common binary runtime environment can subject every application to its standards has outlived its usefulness.
Aggregate packaging of applications for deployment into containers solves these issues. Or does it? This is where we currently encounter the limitations I mentioned:
The Docker packaging format stops at the individual container. And even the 'pods' concept introduced by Kubernetes and picked up by rkt does not address multi-container applications in their entirety. What about when an application's associated metadata and artifact management require separate processes outside the context of the application? These problems require custom-built tooling for every solution, which is not a sustainable way to manage container-based applications.
Kubernetes provides higher-level constructs beyond pods, and it is what we use in the Red Hat family of projects to describe and orchestrate the aggregation of containers into applications. Kubernetes nicely augments the Docker packaging format and allows us to describe a multi-container application in a way that is abstracted from the details of the underlying infrastructure. Red Hat's OpenShift v3 platform implements this at the full level of feature exposure as an end-to-end DevOps workflow. However, Kubernetes on its own does not provide any transport for these complex application definitions. In addition, installation, removal, and other application management tasks are not addressed by Kubernetes itself but are deeply needed by users. The same is true for the other orchestration projects evolving around the container ecosystem.
So while I can 'docker pull' my database, my web frontend, and my load balancer, I have to get my Kubernetes configuration - the helmsman that turns this collection of components into an orchestrated application - through a different method. Today, there is no standard, clean model for aggregating pre-defined building blocks. This means I will likely end up copy-and-pasting examples into my own set of application definitions.
This might not be a major issue in an integrated DevOps model using a solution like Red Hat's OpenShift Enterprise: an end-to-end life cycle and a library of application building blocks will support me in composing my applications. But that model as such does not generically support the idea of standard software components delivered by an external software vendor, or the handover to an enterprise ops environment. The logical next step in the evolution of containerization is to expand the concept of portability to cover the full application.
So what if there was a way to simply package the higher-level definition and distribute it through the same mechanisms already defined for the individual component containers? Perhaps even to manage the inevitable interactions with the person deploying the application or with the management systems? Standard software distributed in a frozen binary format still needs to be parameterized, after all - usually beyond what environment variables can reasonably provide.
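To make this parameterization idea concrete, here is a minimal sketch of how vendor-declared parameter defaults could be merged with deployer-supplied answers at install time. Everything here is illustrative: the function name, the parameter fields, and the data are my own, not part of any spec.

```python
# Sketch: merging vendor-shipped parameter declarations with the answers a
# deployer provides at install time. All names are illustrative.

def resolve_params(declared, answers):
    """Fill each declared parameter from the deployer's answers,
    falling back to the vendor default; fail on missing required values."""
    resolved = {}
    for param in declared:
        name = param["name"]
        if name in answers:
            resolved[name] = answers[name]
        elif "default" in param:
            resolved[name] = param["default"]
        else:
            raise ValueError(f"no value provided for required parameter {name!r}")
    return resolved

declared = [
    {"name": "db_user", "default": "wordpress"},
    {"name": "db_password"},  # required: no default shipped by the vendor
]
print(resolve_params(declared, {"db_password": "s3cret"}))
# -> {'db_user': 'wordpress', 'db_password': 's3cret'}
```

The point is that the deployer only has to answer for what the vendor could not freeze into the image, which goes well beyond what a flat list of environment variables expresses cleanly.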
This is where the Nulecule spec and its first implementation, the Atomic App tool, come in:
Nulecule defines a pattern for packaging multi-container applications with all their dependencies and orchestration metadata in a container image. This enables the in-band transport of this application-level information using the same transport mechanism used for the component containers. It also defines an interaction model allowing parameter management of standard software for deployment as well as the aggregation of multiple complex container-based applications into higher-level applications. ‘Application’ after all is a relative term. Nulecule in itself is agnostic to the container and orchestration mechanisms used.
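To give a feel for the shape of such a descriptor, here is a hand-written sketch expressed as a Python dict (the actual Nulecule file is YAML). Field names loosely follow the early 0.0.x draft of the spec as I read it; treat every name, registry URL, and path here as illustrative rather than authoritative.

```python
# An illustrative descriptor in the spirit of a Nulecule file, expressed as
# a Python dict. Field names approximate the 0.0.x draft; all values are
# made up for this example.

nulecule = {
    "specversion": "0.0.2",
    "id": "wordpress-app",
    "params": [
        {"name": "db_password", "description": "database password"},
    ],
    "graph": [
        {   # an external dependency, pulled as its own container image
            "name": "mariadb-app",
            "source": "docker://registry.example.com/mariadb-atomicapp",
        },
        {   # a component whose orchestration artifacts ship inside this image,
            # keyed by the provider they target
            "name": "wordpress",
            "artifacts": {
                "kubernetes": ["file://artifacts/kubernetes/wordpress-pod.json"],
            },
        },
    ],
}

# The external component images a deployment tool would need to fetch:
external = [c["source"] for c in nulecule["graph"] if "source" in c]
print(external)  # -> ['docker://registry.example.com/mariadb-atomicapp']
```

Because the descriptor itself travels inside a container image, the same registry that distributes the components also distributes the knowledge of how they fit together.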
The Atomic App tool is an implementation of that spec for the Red Hat product universe, using Docker and Kubernetes to implement the packaging format, transport, application description, and orchestration interface.
To illustrate the practical use case: an Atomic App allows a pre-packaged, complex, multi-container application to be distributed out of a Docker registry and deployed with a single command, as simple as issuing:
# atomic run MYAPP
Here’s a demo of how it works in the current upstream community project:
This will also work on Atomic Host. To try this on a regular RHEL server, make sure to install the atomic tool, Docker, and Kubernetes from the Extras content set.
This will launch the wordpress.atomicapp container, take configuration parameters as input, determine the capabilities of the environment, and deploy a running instance of WordPress with a MariaDB backend in a separate container, orchestrated by Kubernetes. The directed graph and layered inheritance defined in the Nulecule specification allow a composite, container-based application to pull layers as needed, in the right order, and deploy them on the matching providers.
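The "in the right order" part of that directed graph boils down to a dependency ordering. A minimal sketch of how a deployment tool could resolve it, using a plain topological sort over hypothetical component names:

```python
# Sketch: resolving the deployment order of a composite application's
# dependency graph via a topological sort. Component names are illustrative.

from graphlib import TopologicalSorter  # Python 3.9+ standard library

# component -> set of components it depends on (its predecessors)
depends_on = {
    "wordpress": {"mariadb"},
    "loadbalancer": {"wordpress"},
    "mariadb": set(),
}

# static_order() yields dependencies before their dependents
order = list(TopologicalSorter(depends_on).static_order())
print(order)  # -> ['mariadb', 'wordpress', 'loadbalancer']
```

So the database comes up before the web frontend, which comes up before the load balancer, without the deployer having to spell the sequence out by hand.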
The Atomic App tool supports the concept of providers, currently offering enablement for pure Docker, Kubernetes, and OpenShift v3.
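Conceptually, provider selection is a preference-ordered match against what the environment actually offers. A toy sketch, in the spirit of the providers concept (the detection logic and preference order here are assumptions of mine, not Atomic App's actual behavior):

```python
# Sketch: picking a deployment provider by preference. The preference order
# and the availability-detection model are illustrative assumptions.

PREFERENCE = ["openshift", "kubernetes", "docker"]

def choose_provider(available):
    """Return the most capable provider detected in the environment."""
    for provider in PREFERENCE:
        if provider in available:
            return provider
    raise RuntimeError("no supported provider found")

print(choose_provider({"docker", "kubernetes"}))  # -> kubernetes
```

The same packaged application can then carry artifacts for several providers and deploy on whichever one the target environment exposes.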
Think of the MSI installer concept married to containerization: the generic packaging of standardized applications for deployment into orchestrated platforms. It is evolving fast right now. Red Hat, our partners, and the community are advancing this concept to benefit ISVs, enterprise organizations, service providers, systems integrators, and other bastions of enterprise-grade open source software. The Nulecule spec and the Atomic App implementation are orthogonal to projects like Docker, Kubernetes, rkt, and the appc specification, and we invite others to collaborate and contribute.
Find the full announcement and information on how to dig deeper and engage at Project Atomic.
About the author
Daniel Riek is responsible for driving the technology strategy and facilitating the adoption of Analytics, Machine Learning, and Artificial Intelligence across Red Hat. Focus areas are OpenShift / Kubernetes as a platform for AI, application of AI development and quality process, AI enhanced Operations, enablement for Intelligent Apps.