In a recent blog post on the appc spec, I mentioned Project Atomic’s evolving Nulecule [pronounced: noo-le-kyul] spec as an attempt to move beyond the current limitations of the container model. Let's dig a bit deeper into that.
Containers are great. Docker introduced the concept of portability with aggregate application packaging to the container space. Since then, we have been on the path to fundamentally changing how complex software has been developed and distributed over the last 20 to 40 years. This paradigm shift is just beginning, but already its impact cannot be ignored.
The reason for this success is a problem that has been obvious to many of us for some time: the modern, open source-based application stack has become far too complex to project onto the traditional monolithic, single-instance / single-version user space model of legacy UNIX. This problem has been made worse by the way binary code distribution has been implemented in popular package managers like rpm and dpkg. Given the broad set of options developers can choose from when building applications, combined with the high rate of change in modern software, the idea that a common binary runtime environment can subject every application to its standards has outlived its usefulness.
Aggregate packaging of applications for deployment into containers solves these issues. - Or does it? - This is where we currently encounter the limitations I mentioned:
The Docker packaging format stops at the individual container. And even the ‘pods’ concept introduced by Kubernetes and picked up by rkt does not address multi-container applications in their entirety. What about when an application’s associated metadata and artifact management require separate processes outside the context of the application? Today these problems require custom-built tooling for every solution, which is not a sustainable way to manage container-based applications.
Kubernetes provides higher-level constructs beyond pods, and it is what we use in the Red Hat family of projects to describe and orchestrate the aggregation of containers into applications. Kubernetes nicely augments the Docker packaging format and allows us to describe a multi-container application in a way that is abstracted from the details of the underlying infrastructure. Red Hat’s OpenShift v3 platform implements this with full feature exposure as an end-to-end DevOps workflow. However, Kubernetes on its own does not provide any transport for these complex application definitions. In addition, installation, removal, and other application management tasks are not addressed by Kubernetes itself, but users deeply need them. The same is true for the other orchestration projects evolving around the container ecosystem.
So while I can ‘docker pull’ my database, my web frontend, and my load balancer, I have to get my Kubernetes configuration - the helmsman that turns this collection of components into an orchestrated application - through a different method. Today, there is no standard, clean model for aggregation of pre-defined building blocks. This means that I will likely end up copy-and-pasting examples into my own set of application definitions.
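To make the gap concrete, here is an illustrative sketch (the file name, labels, and image below are hypothetical, not taken from any particular project): the component images travel through the registry, but the Kubernetes definition that ties them into an application has to be obtained and applied out of band.

```yaml
# wordpress-pod.yaml -- a hypothetical Kubernetes pod definition.
# The image it references is one 'docker pull' away, but this file
# itself has no standard transport: it is typically copy-and-pasted
# from an example and applied separately.
apiVersion: v1
kind: Pod
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  containers:
    - name: wordpress
      image: wordpress        # delivered via the registry...
      ports:
        - containerPort: 80   # ...while this definition travels out of band
```

The orchestration metadata, in other words, is second-class: it is not versioned, signed, or distributed the way the images it describes are.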
This might not be a major issue in an integrated DevOps model using a solution like Red Hat’s OpenShift Enterprise: an end-to-end life cycle and a library of application building blocks will support me composing my applications. But that model as such does not generically support the idea of standard software components delivered from an external software vendor or the handover to an enterprise ops environment. The logical next step in the evolution of containerization is to expand the concept of portability to cover the full application.
So what if there was a way to simply package the higher level definition and distribute it through the same mechanisms already defined for the individual component containers? Perhaps even to manage the inevitable interactions with the person deploying the application or the management systems? Standard software distributed in a frozen binary format still needs to be parameterized after all - usually beyond what environment variables reasonably can provide.
This is where the Nulecule spec and its first implementation, the Atomic App tool, come in:
Nulecule defines a pattern for packaging multi-container applications with all their dependencies and orchestration metadata in a container image. This enables the in-band transport of this application-level information using the same transport mechanism used for the component containers. It also defines an interaction model allowing parameter management of standard software for deployment as well as the aggregation of multiple complex container-based applications into higher-level applications. ‘Application’ after all is a relative term. Nulecule in itself is agnostic to the container and orchestration mechanisms used.
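As a rough illustration of the pattern, a Nulecule file describes the application graph, its parameters, and per-provider artifacts, and ships inside the application's own container image. The sketch below is hypothetical, loosely modeled on the spec's published examples; the ids, image source, and file paths are assumptions for illustration only.

```yaml
# Nulecule -- hypothetical sketch of a multi-container application definition
---
specversion: 0.0.2
id: wordpress-app
graph:
  # External dependency: resolved by pulling another
  # Nulecule-packaged container through the same registry transport.
  - name: mariadb-app
    source: "docker://projectatomic/mariadb-centos7-atomicapp"
  # Local component: deploy-time parameters plus
  # provider-specific orchestration artifacts.
  - name: wordpress
    params:
      - name: db_user
        description: Database user for the WordPress instance
    artifacts:
      kubernetes:
        - "file://artifacts/kubernetes/wordpress-pod.json"
```

The `graph` is what makes 'application' a relative term here: a composite application can reference other Nulecule-packaged applications as components, while the `params` section carries the deploy-time interaction model mentioned above.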
The Atomic App tool is an implementation of that spec for the Red Hat product universe, using Docker and Kubernetes for the packaging format, transport, application description, and orchestration interface.
To illustrate the practical use case: an Atomic App allows a pre-packaged, complex, multi-container application to be distributed out of a Docker registry and deployed with a single command, as simple as issuing:
# atomic run wordpress.atomicapp
This will also work on Atomic Host. To try it on a regular RHEL server, make sure to install the atomic tool, docker, and kubernetes from the Extras content set.
This will launch the wordpress.atomicapp container, take configuration parameters as input, determine the capabilities of the environment, and deploy a running instance of WordPress with a MariaDB backend in a separate container, orchestrated by Kubernetes. The directed graph and layered inheritance defined in the Nulecule specification allow a composite, container-based application to pull layers as needed, in the right order, and to be deployed on the matching providers.
The Atomic App tool supports a concept of providers, currently offering enablement for plain Docker, Kubernetes, and OpenShift v3.
Think of the MSI installer concept married to containerization: the generic packaging of standardized applications for deployment into orchestrated platforms. It is evolving fast right now. Red Hat, our partners, and the community are advancing this concept to benefit ISVs, enterprise organizations, service providers, systems integrators, and other bastions of enterprise-grade open source software. The Nulecule spec and the atomic-app implementation are orthogonal to projects like Docker, Kubernetes, rkt, and the appc specification, and we invite others to collaborate and contribute.
Find the full announcement and information on how to dig deeper and engage at Project Atomic.
About the author
Daniel Riek is responsible for driving the technology strategy and facilitating the adoption of Analytics, Machine Learning, and Artificial Intelligence across Red Hat. Focus areas are OpenShift / Kubernetes as a platform for AI, application of AI development and quality process, AI enhanced Operations, enablement for Intelligent Apps.