In our third and final installment (see: part one & part two), let's take a look at some high-level use cases for Linux containers as well as finally (finally) defend what I like to call "pet" containers. From a general perspective, we see three recurring high-level use cases for containerizing applications:
- The fully orchestrated, multi-container application as you would create in OpenShift via the Red Hat Container Development Kit;
- Loosely orchestrated containers that don't use advanced features like application templates and Kubernetes; and
- Pet containers.
[Table: fully orchestrated multi-container application vs. loosely orchestrated container application vs. pet container]
While the best approach to take may be a fully orchestrated one, 10 years of virtualization experience (read: of trial and error) teaches us that many IT teams will implement a combination of all three. Note that this isn't a bad thing: the truth is that the model enabled by the Docker project has significant advantages for users, even if you don't use the aggregate packaging part.
A key feature of Linux containers is namespace isolation, which enables you to run content from different generations, moving at different speeds, without interference at the binary runtime level. From Red Hat's perspective, the goal was (and still is) to use this as a path for structuring the Red Hat Enterprise Linux userspace and addressing the cost of migrating whole software stacks, an effort that started during the inception of the now-venerable Red Hat Enterprise Linux 5.
It is important to note, however, that this isolation does not replace component-level packaging - we still create RPMs to build into the container image, although it is now easier to mix them with native upstream packages such as Ruby gems or npm modules. What the Docker project does is remove the single dependency on RPM: suddenly, it is viable to use native package formats where applicable, or even to copy files in directly, if you are using the image as binary packaging at the stack level.
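As a rough illustration of that mix, a container build file can pull in RPM content, native language packages, and plain copied-in files side by side. This is a minimal sketch only; the base image, package names, and application layout are assumptions for the sake of the example, not a prescribed Red Hat build.

```
# Illustrative only: mixing RPM content, a native upstream package format,
# and copied-in files in one stack-level image. Names are assumptions.
FROM registry.access.redhat.com/rhel7

# RPM content from the distribution
RUN yum install -y ruby rubygems && yum clean all

# Native upstream package format where it is the natural fit
RUN gem install sinatra

# ...or simply copy files into the image when no package exists
COPY app/ /opt/app/

CMD ["ruby", "/opt/app/app.rb"]
```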
Consider this from a historical point of view: the basic idea for Red Hat's Linux container work started as "why can't we run Red Hat Enterprise Linux 3 code in a chroot on Red Hat Enterprise Linux 5 instead of doing this crazy kernel backport?" The answer from engineering back then was "wait until we have namespaces". The emergence of early virtualization helped manage this pain for a while, but with the growing complexity of the userspace stack, people now find virtualization-only solutions rather untenable.
For example, if I want to back up my Red Hat Enterprise Linux 6 instance on KVM, I have to have a Red Hat Enterprise Linux 6 version of my backup agent. If I want the Unison sync between my Fedora laptop and my Red Hat Enterprise Linux 6 server to keep working after OCaml changed its serialization format, creating a compatibility dependency on the compiler version used, I have to somehow mix a whole library stack between the two operating systems, which, in practical terms, is impossible. With a container, either as a hybrid (the desktop is not containerized, but Unison runs in a Red Hat Enterprise Linux 6 container) or fully containerized, these issues can be solved, reducing them to kernel syscall, filesystem layout, and sometimes API dependencies.
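In the hybrid case, that can be as simple as running the server side of Unison from a Red Hat Enterprise Linux 6 userspace while the laptop stays uncontainerized. The sketch below assumes a hypothetical rhel6-unison image with a matching Unison build installed and an illustrative home directory; it is not a supported recipe.

```
# On the RHEL 6 server: run Unison from a RHEL 6 container so both ends of
# the sync agree on the OCaml-dependent wire format.
# "rhel6-unison" and the paths are illustrative assumptions.
docker run --rm -i \
  -v /home/rik:/home/rik \
  rhel6-unison \
  unison -server
```

On the laptop, Unison's remote shell command would then point at a small wrapper that launches this container instead of invoking a bare unison -server on the host.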
In many cases, these will be "pet containers" - instances where the respective Docker image is simply used to bootstrap an environment that is then treated like a traditional deployment, running yum (or dnf) inside (see the sketch below). That is a totally valid use case, and there is only a minimal threshold between using a pet container and building a container with a build service. There is work to do to actually enable this model, but the difference between tending pets and using a build service is quite small, unless you expect to operate directly on the host - which involves a path change (chroot /host).
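Hand-tending such a pet might look something like the following; the image names, container name, and packages are assumptions made for illustration, and the last command shows the host-path pattern the chroot /host remark refers to.

```
# Bootstrap a long-lived "pet" from a RHEL 6 base image (names illustrative)
docker run -d --name legacy-env registry.access.redhat.com/rhel6 sleep infinity

# Treat it like a traditional deployment: install and patch in place
docker exec -it legacy-env yum install -y httpd
docker exec -it legacy-env yum update -y

# Optionally snapshot the hand-tended state back into an image
docker commit legacy-env legacy-env:snapshot

# For tooling that must act on the host itself, mount the host and switch
# into it - the "chroot /host" path change mentioned above (image illustrative)
docker run -it --privileged -v /:/host rhel7-tools chroot /host
```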
So why defend the pet container if the other models are actually "better" use cases for container deployments? It's all about lowering barriers to adoption - deploying via a build service, while not particularly taxing, doesn't necessarily mesh with the workflows of existing enterprises, including Red Hat customers. By embracing the concept of pet containers, we can better enable our customers to move existing applications into containers, effectively providing a Red Hat Enterprise Linux 6 userspace environment on Red Hat Enterprise Linux 7. This can drive adoption without forcing wholesale operational change - a key motivator in encouraging innovation at the customer level and, hopefully, across the enterprise IT world at large.
To try this, the no-cost Red Hat Developer Subscription and the Red Hat Container Development Kit are excellent entry points. Questions or feedback? Reach out using the comments section (below).
About the author
Daniel Riek is responsible for driving the technology strategy and facilitating the adoption of analytics, machine learning, and artificial intelligence across Red Hat. His focus areas are OpenShift / Kubernetes as a platform for AI, applying AI to development and quality processes, AI-enhanced operations, and enablement for intelligent apps.