What is containerization?

Containerization is the packaging together of software code with all its necessary components, like libraries, frameworks, and other dependencies, so that they are isolated in their own "container."

The software or application within the container can then be moved and run consistently in any environment and on any infrastructure, independent of that environment or infrastructure’s operating system. The container acts as a kind of bubble, a computing environment that surrounds the application and keeps it independent of its surroundings. It’s essentially a fully functional and portable computing environment.

Containers are an alternative to writing code for one platform or operating system, an approach that made moving an application difficult because the code might not be compatible with the new environment. That could result in bugs, errors, and glitches that needed fixing (meaning more time, less productivity, and a lot of frustration).

By packaging up an application in a container that can be moved across platforms and infrastructures, the application can run wherever you take it, because it carries everything it needs to run successfully.
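
As an illustration, a container image is typically built from a short recipe called a Containerfile (Docker calls it a Dockerfile). The following is a minimal sketch for a hypothetical Python web app; the base image, the app.py file, and the requirements.txt file are assumptions made for this example, not part of any particular product:

    # Base image that supplies the OS layers and Python runtime (illustrative choice)
    FROM python:3.11-slim

    # Copy the dependency list and install the libraries into the image
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt

    # Copy the application code and define the command the container runs at start
    COPY . .
    CMD ["python", "app.py"]

Building this file with a tool such as Podman (for example, podman build -t myapp .) produces a single image that bundles the code, runtime, and libraries together, which is what makes the result portable.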

The idea of process isolation has been around for years, but when Docker introduced Docker Engine in 2013, it set a standard for container use with tools that were easy for developers to use, as well as a universal approach for packaging, which then accelerated the adoption of container technology. Today developers can choose from a selection of containerization platforms and tools—like Podman, Buildah, and Skopeo—that support the Open Container Initiative standards pioneered by Docker.

The "lightweight" or portability characteristic of containers comes from their ability to share the host machine’s operating system kernel, negating the need for a separate operating system for each container and allowing the application to run the same on any infrastructure—bare metal, cloud—even within virtual machines (VMs), as we’ll see in the next section. 

Similarly, developers can use the same tools when working with containers in one host environment as they’d use in another, which makes developing and deploying containerized apps across operating systems much simpler.

A virtual machine (VM) is a virtual environment that functions as a virtual computer system with its own CPU, memory, network interface, and storage, created on a physical hardware system (located off- or on-premises).

Containerization and virtualization are similar in that they both allow applications to be fully isolated so they can run in multiple environments. The main differences are in size and portability.

VMs are the larger of the two, typically measured in gigabytes and containing their own OS, which allows them to perform multiple resource-intensive functions at once. The increased resources available to VMs allow them to abstract, split, duplicate, and emulate entire servers, operating systems, desktops, databases, and networks.

Containers are much smaller, typically measured in megabytes, and package nothing bigger than an app and its running environment.

Where VMs work well with traditional, monolithic IT architecture, containers were made to be compatible with newer and emerging technologies like clouds, CI/CD, and DevOps.

Containers are often used to package single functions that perform specific tasks, known as microservices. Microservices are the result of breaking the parts of an application into smaller, more specialized services. This allows developers to focus on a specific area of an application without impacting its overall performance.

This means the apps stay up and running while updates are made or issues are fixed, allowing for faster improvements, testing, and deployment. 

Microservices and containers work well together, as a microservice within a container has all the portability, compatibility, and scalability of a container. 

Now, how to manage all those containers? That’s where container orchestration comes in. Container orchestration is the automation of the deployment, management, scaling, and networking of containers. 

Kubernetes is an open source container orchestration platform that helps manage distributed, containerized applications at massive scale. You tell Kubernetes where you want your software to run, and the platform takes care of almost everything it takes to deploy and manage your containers.
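
As a rough sketch of what that looks like in practice, the desired state is usually written in a manifest and handed to the cluster. The Deployment below is a minimal, hypothetical example (the name, labels, and image are placeholders); applying it with kubectl apply -f deployment.yaml asks Kubernetes to keep three copies of the container running and to replace any that fail:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp                # hypothetical application name
    spec:
      replicas: 3                # desired number of running copies
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: quay.io/example/myapp:1.0   # hypothetical container image
            ports:
            - containerPort: 8080

Kubernetes then schedules those containers onto available machines, restarts them if they crash, and scales the count up or down when the replicas value changes.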

Kubernetes was originally developed and designed by engineers at Google—one of the early contributors to Linux container technology—before it was donated to the Cloud Native Computing Foundation (CNCF) in 2015. That means that the CNCF is the entity responsible for maintaining the Kubernetes community, while volunteer contributors and administrators are responsible for Kubernetes development, maintenance, and releases.

Red Hat was one of the first companies to work with Google on Kubernetes—even prior to launch—and has since become the second leading contributor to the Kubernetes project.

Red Hat OpenShift is Kubernetes for the enterprise. It’s a software product that includes components of the Kubernetes container management project, but adds productivity and security features that are important to large-scale companies.

"OpenShift" refers to the downstream container orchestration technology derived from the OKD open source project (previously known as OpenShift Origin). "Red Hat OpenShift" refers to the suite of container orchestration products by Red Hat. Red Hat packages a number of preconfigured components alongside OpenShift in various products, including:
