Linux® containers and virtual machines (VMs) are packaged computing environments that combine various IT components and isolate them from the rest of the system. Their main differences are in terms of scale and portability.
- Containers are typically measured by the megabyte. They don’t package anything bigger than an app and all the files necessary to run it, and are often used to package single functions that perform specific tasks (known as microservices). The lightweight nature of containers—and their shared operating system (OS)—makes them very easy to move across multiple environments.
- VMs are typically measured by the gigabyte. They usually contain their own OS, allowing them to perform multiple resource-intensive functions at once. The increased resources available to VMs allow them to abstract, split, duplicate, and emulate entire servers, OSs, desktops, databases, and networks.
Beyond the technological differences, comparing containers to VMs is a proxy comparison between emerging IT practices and traditional IT architectures.
Emerging IT practices (cloud-native development, CI/CD, and DevOps) are possible because workloads are broken into the smallest serviceable units possible—usually a function or microservice. These small units are best packaged in containers, which allow multiple teams to work on individual parts of an app or service without interrupting or threatening code packaged in other containers.
Traditional IT architectures (monolithic and legacy) keep every aspect of a workload in a single, large file that cannot be split up, so the workload needs to be packaged as a whole unit within a larger environment, often a VM. It was once common to build and run an entire app within a VM, though having all the code and dependencies in one place led to oversized VMs that experienced cascading failures and downtime when pushing updates.
That depends—do you need a small instance of something that can be moved easily (containers), or do you need a semi-permanent allocation of custom IT resources (VMs)?
The small, lightweight nature of containers allows them to be moved easily across bare metal systems as well as public, private, hybrid, and multicloud environments. They’re also the ideal environment to deploy today’s cloud-native apps, which are collections of microservices designed to provide a consistent development and automated management experience across public, private, hybrid, and multicloud environments. Cloud-native apps help speed up how new apps are built, how existing ones are optimized, and how they’re all connected. The caveat is that containers have to be compatible with the underlying OS. Compared to VMs, containers are best used to:
- Build cloud-native apps
- Package microservices
- Instill DevOps or CI/CD practices
- Move scalable IT projects across a diverse IT footprint that shares the same OS
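To make the packaging idea concrete, here is a minimal sketch of a container definition for a single microservice. The base image, file names, and app are hypothetical—this is an illustration of "one container, one function," not a production build.

```dockerfile
# Hypothetical image for one microservice: just the app and the files it needs to run.
FROM registry.access.redhat.com/ubi9/python-311

# Copy only the service's dependencies and code—nothing else ships in the image.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# One container, one function: start the single microservice.
CMD ["python", "app.py"]
```

Because the resulting image carries only the app and its dependencies (the host supplies the OS kernel), it stays small and can move unchanged across any environment running a compatible OS.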
VMs are capable of running far more operations than a single container, which is why they are the traditional way monolithic workloads have been (and still are) packaged. But that expanded functionality makes VMs far less portable because of their dependence on the OS, application, and libraries. Compared to containers, VMs are best used to:
- House traditional, legacy, and monolithic workloads
- Provision semi-permanent allocations of custom IT resources
- Run multiple resource-intensive functions at once
Software called a hypervisor separates resources from their physical machines so they can be partitioned and dedicated to VMs. When a user issues a VM instruction that requires additional resources from the physical environment, the hypervisor relays the request to the physical system and caches the changes. VMs look and act like physical servers, which can multiply the drawbacks of application dependencies and large OS footprints—a footprint that's mostly not needed to run a single app or microservice.
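As one hedged illustration of how a hypervisor partitions and dedicates physical resources, a KVM guest can be described in libvirt domain XML. The VM name and disk path below are hypothetical; the sketch shows how a fixed slice of memory, CPU, and storage is carved out for one VM.

```xml
<domain type='kvm'>
  <name>demo-vm</name>                      <!-- hypothetical VM name -->
  <memory unit='GiB'>4</memory>             <!-- dedicated slice of host RAM -->
  <vcpu>2</vcpu>                            <!-- dedicated virtual CPUs -->
  <os>
    <type arch='x86_64'>hvm</type>          <!-- full-machine virtualization: the guest runs its own OS -->
  </os>
  <devices>
    <disk type='file' device='disk'>
      <!-- hypothetical path; holds the guest's entire OS and app stack -->
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```

Note how even this minimal definition allocates gigabytes of memory and a whole OS disk—the "large OS footprint" the surrounding text describes, most of which goes unused when the goal is running a single app or microservice.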
Containers hold a microservice or app and everything it needs to run. Everything within a container is preserved on something called an image—a code-based file that includes all libraries and dependencies. A container image can be thought of as a snapshot of a Linux distribution installation, since it comes with packages (such as RPMs) and configuration files. Because containers are so small, there are usually hundreds of them loosely coupled together—which is why container orchestration platforms (like Red Hat OpenShift and Kubernetes) are used to provision and manage them.
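To sketch what "provision and manage" looks like in practice, here is a minimal Kubernetes Deployment. The service name, labels, and image reference are hypothetical; the point is that the orchestrator, not a human, keeps the declared number of container copies running.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service          # hypothetical microservice name
spec:
  replicas: 3                     # Kubernetes keeps 3 copies of this container running
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
      - name: checkout
        image: quay.io/example/checkout:1.0   # hypothetical image reference
        ports:
        - containerPort: 8080
```

If one container fails, the platform replaces it automatically—which is what makes running hundreds of loosely coupled containers manageable.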
Because we’ve supported virtualization and container development for a long time. We’ve been contributing to the Kernel-based Virtual Machine (KVM) and oVirt communities since both were founded, and we’re the second largest contributor to the Docker and Kubernetes codebases. We’re also invested in the future of these 2 technologies. Our involvement in container-native virtualization, KubeVirt, and hyperconverged infrastructure is improving how containers and VMs work together as part of the same IT system.