Understanding virtualization

New software, from operating systems to applications, constantly demands more. More data, more processing power, more memory. Virtualization makes a single physical machine act like multiple machines, saving you the cost of additional servers and workstations.

Virtualization is technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system. Software called a hypervisor connects directly to that hardware and allows you to split 1 system into separate, distinct, and secure environments known as virtual machines (VMs). These VMs rely on the hypervisor’s ability to separate the machine’s resources from the hardware and distribute them appropriately. Virtualization helps you get the most value from previous investments.
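
To make that split concrete, here's a minimal sketch using the libvirt Python bindings, the library behind many KVM-based management tools, to ask a hypervisor for a new guest. The connection URI is the standard local KVM endpoint; the domain XML, guest name, and resource sizes are illustrative assumptions, not a production definition.

    import libvirt

    # Illustrative domain definition: the name, memory, and vCPU count are
    # assumptions, and a real guest would also declare disks and networks.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>demo-guest</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
    </domain>
    """

    conn = libvirt.open("qemu:///system")  # talk to the local KVM hypervisor
    dom = conn.createXML(DOMAIN_XML, 0)    # define and boot a transient guest
    print(f"Started guest '{dom.name()}' with ID {dom.ID()}")
    conn.close()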

The physical hardware, equipped with a hypervisor, is called the host, while the many VMs that use its resources are guests. These guests treat computing resources—like CPU, memory, and storage—as a pool of resources that can easily be relocated. Operators can control virtual instances of CPU, memory, storage, and other resources, so guests receive the resources they need when they need them.
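
As a sketch of that pooling in practice, the same bindings can report how the host's resources are currently parceled out to each guest. The read-only URI is again the standard local KVM endpoint:

    import libvirt

    # Inspect how the host's pooled CPU and memory are handed out to guests.
    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        # info() returns [state, max memory (KiB), current memory (KiB),
        # number of vCPUs, cumulative CPU time (ns)].
        state, max_kib, cur_kib, vcpus, cpu_ns = dom.info()
        print(f"{dom.name()}: {vcpus} vCPUs, "
              f"{cur_kib // 1024} of {max_kib // 1024} MiB in use")
    conn.close()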

Virtualization comes in several forms:

  • Network functions virtualization: isolated, virtual networks can be created from 1 original network.
  • Server virtualization: a single server can be made to act like a couple of servers, or hundreds of them.
  • Operating system virtualization: 1 computer can run multiple different operating systems.

Virtualizing resources lets administrators pool their physical resources, so their hardware can truly be commoditized. Legacy infrastructure that's expensive to maintain, but that still supports important apps, can be virtualized for optimal use.

Administrators no longer have to wait for every app to be certified on new hardware; just set up the environment, migrate the VM, and everything works as before. During regression tests, a testbed can be created or copied easily, eliminating the need for dedicated testing hardware or redundant development servers. With the right training and knowledge, these environments can be further optimized to gain greater capabilities and density.
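
For instance, a disposable regression testbed might look something like this sketch, which snapshots a guest before a test run and rolls it back afterward. The guest name is hypothetical, and it assumes the guest's storage supports snapshots (for example, qcow2 images):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("build-server")  # hypothetical guest name

    # Capture the guest's state before the test run.
    snap = dom.snapshotCreateXML(
        "<domainsnapshot><name>pre-test</name></domainsnapshot>", 0)

    # ... run the regression suite against the guest here ...

    dom.revertToSnapshot(snap)  # roll the testbed back to its pristine state
    conn.close()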

You know that security should be continuous and integrated. Virtualization is an elegant solution to many common security problems. In environments where security policies require systems separated by a firewall, those 2 systems could safely reside on the same physical box. In a development environment, each developer can have their own sandbox, immune from another developer’s rogue or runaway code.

Virtualization management

Virtualization management software is designed to—well—make virtualization more manageable. Sure, you can manually allocate resources into VMs, make space for them on servers, test them, and install patches as needed. But splitting single systems into hundreds means multiplying the work needed to keep those systems running, up to date, and secure.

If all the VMs are tied to a monitoring, provisioning, or management tool, systems can be migrated automatically to better-suited hardware during periods of peak use or maintenance. Imagine a farm of servers that can be retasked in seconds, according to workload and time of day. As a particular guest instance begins consuming more resources, the monitoring system moves that guest to another server with less demand or allocates more resources to it from a central pool.
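
A bare-bones version of that monitoring loop might look like the following sketch. The host URIs, guest name, and threshold are all assumptions, and a real setup would also need shared storage (or storage copying) for live migration to work:

    import libvirt

    THRESHOLD_MIB = 6 * 1024  # assumed memory threshold for rebalancing

    src = libvirt.open("qemu:///system")                # busy host
    dst = libvirt.open("qemu+ssh://spare-host/system")  # hypothetical quiet host

    dom = src.lookupByName("busy-guest")  # hypothetical guest name
    _state, _max_kib, cur_kib, _vcpus, _cpu_ns = dom.info()

    if cur_kib // 1024 > THRESHOLD_MIB:
        # Move the running guest to the quieter host without shutting it down.
        dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    src.close()
    dst.close()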

Virtualization vs. cloud computing

It's easy to confuse virtualization and cloud computing, particularly because they both revolve around separating resources from hardware to create a useful environment. Virtualization helps create clouds, but that doesn't make it cloud computing. Think about it like this:

  • Virtualization is a technology that separates functions from hardware
  • Cloud computing is more of a solution that relies on that split

The National Institute of Standards and Technology cites 5 features of cloud computing: a network, pooled resources, a user interface, provisioning capabilities, and automatic resource control/allocation. While virtualization creates the network and pooled resources, additional management and operating system software is needed to create a user interface, provision VMs, and control/allocate resources.

Why choose open source virtualization?

Because it's not just about virtualization. It's about what virtualization can (or can't) do to support the technologies that depend on it.

Proprietary virtualization limits access to its source code, which is the key to making your IT infrastructure do what you want it to. These vendors regularly bind users to enterprise license agreements (ELAs) that increase your reliance on that vendor’s software. This can reduce your ability to invest in modern technologies like clouds, containers, and automation systems.

On the other hand, open source virtualization gives users complete control over the infrastructure it creates and everything that relies on it. That means you can modify it to work with (or without) any vendor. And there’s no need for an ELA because there’s no source code to protect. It’s yours.

Virtualization vs. containers

Virtualization provisions the resources that containers can use. These VMs are environments in which containers can run, but containers aren’t tied to virtual environments. Some software—like Red Hat® OpenShift® Virtualization—can both orchestrate containers and manage virtual machines, but that doesn't mean the 2 technologies are the same.

VMs have finite capabilities because the hypervisors that create them are tied to the finite resources of a physical machine. Containers, on the other hand, share the same operating system kernel and package applications with their runtime environments so the whole thing can be moved, opened, and used across development, testing, and production configurations.
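
One quick way to see the kernel-sharing point is to compare kernel versions: a container reports the same kernel as its host, while a VM boots its own. This sketch assumes podman and the Fedora base image are available locally:

    import subprocess

    def run(cmd):
        # Run a command and return its stdout, stripped.
        return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

    host = run(["uname", "-r"])
    container = run(["podman", "run", "--rm",
                     "registry.fedoraproject.org/fedora", "uname", "-r"])

    print("host kernel:     ", host)
    print("container kernel:", container)  # same string: the kernel is shared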

Why choose Red Hat for virtualization?

Because you can use more of the hardware you have to run the systems you're familiar with on one of the world's most powerful virtualization infrastructures.

We've supported virtualization development for a long time, improving the Kernel-based Virtual Machine (KVM) hypervisor and contributing to the KVM and oVirt communities since they were founded. Red Hat also uses its own products internally to achieve faster innovation and a more agile, responsive operating environment.

The KVM hypervisor is now the core of all major OpenStack® and Linux® virtualization distributions, and it's set records for overall performance and for running the largest number of well-performing VMs on a single server.

All this is open source, which means it's designed for, tested on, and certified for all kinds of hardware. We've even collaborated with Microsoft, so you can deploy VMs on Red Hat® Enterprise Linux or manage hundreds of Windows-based VMs using a single virtualization product.

Keep reading

Containers vs VMs
Linux containers and virtual machines (VMs) are packaged computing environments that combine various IT components and isolate them from the rest of the system.

What is a virtual machine (VM)?
A virtual machine (VM) is an isolated computing environment created by abstracting resources from a physical machine.

What is KVM?
Kernel-based Virtual Machine (KVM) is an open source virtualization technology that turns Linux into a hypervisor.
