Containers

What's a Linux container?

A Linux® container is a set of one or more processes that are isolated from the rest of the system. All the files necessary to run them are provided by a distinct image, meaning Linux containers are portable and consistent as they move from development to testing and finally to production. This makes containerized workflows much faster than development pipelines that rely on replicating traditional testing environments.


Imagine you’re developing an application. You do your work on a laptop and your environment has a specific configuration. Other developers may have slightly different configurations. The application you’re developing relies on that configuration and is dependent on specific libraries, dependencies, and files. Meanwhile, your business has development and production environments which are standardized with their own configurations and their own sets of supporting files. You want to emulate those environments as much as possible locally, but without all of the overhead of recreating the server environments. So, how do you make your app work across these environments, pass quality assurance, and get your app deployed without massive headaches, rewriting, and break-fixing? The answer: containers.

The container that holds your application has the necessary libraries, dependencies, and files so that you can move it through to production without all of the nasty side effects. In fact, the contents of a container image can be thought of as an installation of a Linux distribution because it comes complete with RPM packages, configuration files, etc. But container image distribution is a lot easier than installing new copies of operating systems. Crisis averted, and everyone’s happy.

That’s a common example, but Linux containers can be applied to many different problems where ultimate portability, configurability, and isolation are needed. The point of Linux containers is to develop faster and meet business needs as they arise. Whether the infrastructure is on premises, in the cloud, or a hybrid of the two, containers meet the demand. Of course, choosing the right container platform is just as important as the containers themselves.

Isn’t this just virtualization?

Not exactly. Think of them more as complementary to one another. Here’s an easy way to think about the two:

  • Virtualization lets multiple operating systems (Windows and Linux) run simultaneously on a single hardware system.
  • Containers share the same operating system kernel and isolate the application processes from the rest of the system (a minimal sketch of that isolation follows this list). For example: ARM Linux systems run ARM Linux containers, x86 Linux systems run x86 Linux containers, and x86 Windows systems run x86 Windows containers. Linux containers are extremely portable, but they must be compatible with the underlying system.
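
To make the shared-kernel, isolated-process idea concrete, here is a minimal sketch in Go (the language much container tooling is written in) that starts a shell in its own UTS, PID, and mount namespaces. It assumes a Linux host and sufficient privileges (typically root); the clone flags and the /bin/sh path are illustrative choices, not any particular runtime’s implementation.

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Start a shell in new UTS, PID, and mount namespaces so that its
        // hostname, process tree, and mounts are isolated from the host.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

Inside that shell, changing the hostname affects only the child’s own UTS namespace, while the kernel underneath is still the host’s kernel, which is the essence of the container model.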


What does this mean? For starters, virtualization uses a hypervisor to emulate hardware, which allows multiple operating systems to run side by side. This isn’t as lightweight as using containers. When you have finite resources with finite capabilities, you need lightweight apps that can be densely deployed. Linux containers run natively on the operating system, sharing it across all of your containers, so your apps and services stay lightweight and run swiftly in parallel.

Linux containers are another evolutionary leap in how we develop, deploy, and manage applications. Linux container images provide portability and version control, helping ensure that what works on a developer’s laptop also works in production. Compared to virtual machines, a running Linux container is less resource-intensive, has a standard interface (start, stop, environment variables, etc.), retains application isolation, and is more easily managed as part of a larger application (multiple containers). Plus, those multi-container applications can be orchestrated across multiple clouds.


A brief history of containers


Containers did not originate in Linux, but in the open source world the best ideas are borrowed, modified, and improved upon. Containers are no different.

The idea of what we now call container technology first appeared in 2000 as FreeBSD jails, a technology that allows a FreeBSD system to be partitioned into multiple subsystems, or jails. Jails were developed as safe environments that a system administrator could share with multiple users inside or outside of an organization. In a jail, processes are created in a modified chroot environment, where access to the filesystem, networking, and users is virtualized, and the intent was that they could not escape or compromise the entire system. Jails were limited in implementation, and methods for escaping the jailed environment were eventually discovered.

But the concept was compelling.

In 2001, an implementation of an isolated environment made its way into Linux, by way of Jacques Gélinas’ VServer project. As Gélinas put it, this was an effort to run “several general purpose Linux server [sic] on a single box with a high degree of Independence and security.” Once this foundation was set for multiple controlled userspaces in Linux, pieces began to fall into place to form what is today’s Linux container.

Containers become practical

Very quickly, more technologies combined to make this isolated approach a reality. Control groups (cgroups) is a kernel feature that controls and limits resource usage for a process or group of processes. And systemd, an initialization system that sets up userspace and manages its processes, uses cgroups to provide greater control over these isolated processes. Both of these technologies, while adding overall control for Linux, provided the framework for keeping environments separated.
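
As a rough illustration of what cgroups expose, the sketch below caps the memory available to the current process by writing to the cgroup filesystem. It is a minimal example in Go, assuming cgroups v2 mounted at /sys/fs/cgroup, the memory controller enabled for child groups, and enough privilege to create a group there; the "demo" group name is hypothetical.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Create a new cgroup under the v2 unified hierarchy.
        cg := "/sys/fs/cgroup/demo"
        if err := os.MkdirAll(cg, 0o755); err != nil {
            panic(err)
        }
        // Cap memory for members of this cgroup at 64 MiB.
        if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("67108864"), 0o644); err != nil {
            panic(err)
        }
        // Move the current process into the cgroup; the limit now applies
        // to it and to any children it starts.
        pid := []byte(fmt.Sprintf("%d", os.Getpid()))
        if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), pid, 0o644); err != nil {
            panic(err)
        }
        fmt.Println("now running under a 64 MiB memory limit")
    }

Container runtimes do essentially the same thing on your behalf, creating a cgroup per container and writing CPU, memory, and I/O limits into it.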

Advancements in kernel namespaces provided the next step for containers. With kernel namespaces, everything from process IDs to network names could be virtualized within the Linux kernel. One of the newer ones, user namespaces, “allow per-namespace mappings of user and group IDs. In the context of containers, this means that users and groups may have privileges for certain operations inside the container without having those privileges outside the container.” The Linux Containers project (LXC) then added some much-needed tools, templates, libraries, and language bindings for these advancements, improving the user experience when using containers. LXC made it easy for users to start containers with a simple command line interface.
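
The user namespace behavior quoted above can be sketched with Go’s support for UID and GID mappings. In this hedged example (assuming a Linux host with unprivileged user namespaces enabled; the one-to-one mapping is illustrative), the child process sees itself as root, yet every action it takes maps back to the ordinary user who launched it.

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Run `id` in a new user namespace; inside it, the process is "root".
        cmd := exec.Command("/bin/sh", "-c", "id")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUSER,
            // Map UID/GID 0 inside the namespace to the current unprivileged
            // user and group outside of it.
            UidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getuid(), Size: 1}},
            GidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getgid(), Size: 1}},
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

Run as an unprivileged user, this should report uid=0 inside the namespace, which is exactly the per-namespace privilege mapping the quotation describes.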

Enter Docker

In 2013, Docker came onto the scene (by way of dotCloud) with its eponymous container technology. The Docker technology added a lot of new concepts and tools: a simple command line interface for running and building new layered images, a server daemon, a library of prebuilt container images, and the concept of a registry server. Combined, these technologies allowed users to quickly build new layered containers and easily share them with others.

Red Hat recognized the power of collaboration within this new ecosystem and used the underlying technology for our OpenShift Container Platform. To allay fears of a single vendor controlling such an important technology, Docker Inc. contributed many of the underlying components to community-led, open source projects (runc is part of the Open Container Initiative and containerd has been moved to the CNCF).

There are three major standards that ensure interoperability of container technologies: the OCI Image, Distribution, and Runtime specifications. Combined, these specifications allow community projects, commercial products, and cloud providers to build interoperable container technologies (think pushing your custom-built images into a cloud provider’s registry server; you need that to work). Today, Red Hat and Docker, among many others, are members of the Open Container Initiative (OCI) and are enabling an open, industry-wide standardization of container technologies.


What about container security?

Containers are popular, but how safe are they? There are a lot of moving parts to container security: you need to protect the container pipeline and application, secure the deployment environments and infrastructure, and integrate with enterprise security tools and policies. You need a plan.


We can help.

Red Hat has a long history of working in the open source community to make technologies like containers secure, stable, and reliable. It’s what we do. And we support those technologies, so if you need help, we’re there.

Red Hat’s technologies take all of the guesswork out of doing containers the right way. Whether it’s getting your development teams on a platform built with containers in mind, running your container infrastructure on a best-in-class operating system, or providing storage solutions for the massive amounts of data generated by containers, Red Hat’s solutions have you covered.

There's a lot more to do with containers