Understanding virtualization

New software, from operating systems to applications, constantly demands more. More data, more processing power, more memory. Virtualization makes a single physical machine act like multiple machines—saving you the cost of more servers and workstations.

What is virtualization?

Virtualization is technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system. Software called a hypervisor connects directly to that hardware and allows you to split 1 system into separate, distinct, and secure environments known as virtual machines (VMs). These VMs rely on the hypervisor’s ability to separate the machine’s resources from the hardware and distribute them appropriately.

The original, physical machine equipped with the hypervisor is called the host, while the many VMs that use its resources are called guests. These guests treat computing resources—like CPU, memory, and storage—as a hangar of resources that can easily be relocated. Operators can control virtual instances of CPU, memory, storage, and other resources, so guests receive the resources they need when they need them.
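The host-and-guest relationship described above can be sketched as a toy model: a hypervisor admits guests only while the physical pool has capacity left. Every name here (`Host`, `VM`, `allocate`) is invented for illustration—this is not a real hypervisor API.

```python
# Toy model of a hypervisor carving one host's physical resources
# into guest VMs. Names and numbers are illustrative only.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    vcpus: int
    memory_gb: int

@dataclass
class Host:
    cpus: int
    memory_gb: int
    guests: list = field(default_factory=list)

    def free_cpus(self):
        return self.cpus - sum(g.vcpus for g in self.guests)

    def free_memory(self):
        return self.memory_gb - sum(g.memory_gb for g in self.guests)

    def allocate(self, vm: VM) -> bool:
        # The hypervisor only admits a guest if enough of the
        # physical pool remains; otherwise the request is refused.
        if vm.vcpus <= self.free_cpus() and vm.memory_gb <= self.free_memory():
            self.guests.append(vm)
            return True
        return False

host = Host(cpus=16, memory_gb=64)
host.allocate(VM("web", vcpus=4, memory_gb=8))        # fits
host.allocate(VM("db", vcpus=8, memory_gb=32))        # fits
ok = host.allocate(VM("batch", vcpus=8, memory_gb=16))  # refused: only 4 CPUs left
print(ok, host.free_cpus(), host.free_memory())
```

The point of the sketch is the accounting: guests never see the physical hardware directly, only the slice of the pool the hypervisor hands them.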

Ideally, all related VMs are managed through a single web-based virtualization management console, which speeds things up. Virtualization lets you dictate how much processing power, storage, and memory to give VMs, and environments are better protected since VMs are separated from their supporting hardware and each other.

Simply put, virtualization creates the environments and resources you need from underused hardware.

What can you do with virtualization?

Server virtualization

A single server can be made to act like a couple—or hundreds.

Operating system virtualization

1 computer can run multiple different operating systems.

Network functions virtualization

Isolated, virtual networks can be created from 1 original network.

What are the benefits of virtualization?

Virtualized resources free administrators from worrying about where the hardware is physically installed. The hardware in the data center becomes truly commoditized: Upgrades can occur seamlessly, without the VM or application realizing that its host machine has been changed. And downtime can be reduced dramatically.

Administrators no longer have to wait for every application to be certified on new hardware; just migrate the VM and everything works as before. During regression tests, a testbed can be created or copied easily, eliminating the need for dedicated testing hardware or redundant development servers.

In security, virtualization is an elegant solution to many common problems. In environments where security policies require systems separated by a firewall, those 2 systems could safely reside on the same physical box. In a development environment, each developer can have their own sandbox, immune from another developer’s rogue or runaway code.

Virtualization’s effect on efficiency and cost

In this study, Forrester Consulting interviews a Red Hat Virtualization customer that experienced an ROI of 103% and a payback period of 5.6 months.

How are virtual machines managed?

Virtualization management software is designed to—well—make virtualization more manageable. Sure, you can manually allocate resources into VMs, make space for them on servers, test them, and install patches as needed. But splitting single systems into hundreds means multiplying the work needed to keep those systems running, up to date, and secure. Operations teams may not get custom VMs fast enough while IT departments are handcuffed by all those manual processes.

If all the VMs are tied to a system monitoring, provisioning, or management tool, systems can be migrated automatically to better-suited hardware during periods of peak use or maintenance. Imagine a farm of servers that can be retasked in seconds—according to workload and time of day. As a particular guest instance begins consuming more resources, the monitoring system moves that guest to another server with less demand or allocates more resources to it from a central pool.
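That rebalancing loop can be sketched in a few lines. This is a hypothetical simplification—real management tools weigh far more than a single load number—and the threshold, host names, and load fractions are all made up.

```python
# Hypothetical sketch of automated rebalancing: when a host's total
# guest load exceeds a threshold, migrate its busiest guest to the
# least-loaded host. Loads are fractions of host capacity.

def rebalance(hosts, threshold=0.8):
    """hosts: dict of host name -> {guest name: load fraction}."""
    moves = []
    for name, guests in hosts.items():
        while sum(guests.values()) > threshold and len(guests) > 1:
            # Pick the hungriest guest on the overloaded host...
            guest = max(guests, key=guests.get)
            # ...and the host with the most headroom to receive it.
            target = min(hosts, key=lambda h: sum(hosts[h].values()))
            if target == name:
                break
            hosts[target][guest] = guests.pop(guest)
            moves.append((guest, name, target))
    return moves

hosts = {
    "host-a": {"web": 0.5, "db": 0.4},  # total 0.9: over threshold
    "host-b": {"batch": 0.2},
}
moves = rebalance(hosts)
print(moves)  # "web", the busiest guest, moves from host-a to host-b
```

A production scheduler would also account for memory, affinity rules, and migration cost, but the shape of the decision—detect pressure, pick a guest, pick a target—is the same.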

Each virtualization management system is unique, but most feature an uncomplicated user interface, streamline the VM creation process, monitor the virtual environment, allocate resources, compile reports, and automatically enforce rules. There are even virtualization management solutions that integrate across hardware and software brands—allowing users to install the management system that’s best for them.

What's the difference between virtualization and cloud computing?

It's easy to confuse the 2, particularly because they both revolve around separating resources from hardware to create a useful environment. Think about it like this:

  • Virtualization is a technology that separates functions from hardware
  • Cloud computing is more of a solution that relies on that split

Virtualization helps create clouds, but that doesn’t make it cloud computing.

The National Institute of Standards and Technology cites 5 features of cloud computing: a network, pooled resources, a user interface, provisioning capabilities, and automatic resource control/allocation. While virtualization creates the network and pooled resources, additional management and operating system software is needed to create a user interface, provision VMs, and control/allocate resources.

Aren't VMs just containers?

Virtualization provisions the resources that containers can use. These VMs are environments in which containers can run, but containers aren’t tied to virtual environments.

VMs have finite capabilities because the hypervisors that create them are tied to the finite resources of a physical machine. Containers, on the other hand, share the same operating system kernel and package applications with their runtime environments so the whole thing can be moved, opened, and used across development, testing, and production configurations.
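The contrast above can be made concrete with back-of-envelope arithmetic: every VM carries its own guest operating system, while all containers on a host share one kernel. The sizes below are invented round numbers for illustration, not benchmarks.

```python
# Rough footprint comparison under assumed sizes: VMs each bundle a
# guest OS; containers share the host kernel and add only the app.
GUEST_OS_GB = 2.0       # assumed per-VM operating system footprint
APP_GB = 0.5            # assumed per-instance application + runtime
SHARED_KERNEL_GB = 2.0  # one kernel shared by every container

def vm_footprint(instances):
    return instances * (GUEST_OS_GB + APP_GB)

def container_footprint(instances):
    return SHARED_KERNEL_GB + instances * APP_GB

for n in (1, 10, 100):
    print(n, vm_footprint(n), container_footprint(n))
```

Under these assumptions the per-VM cost grows with the guest OS included, while the container side pays for the kernel once—which is why containers are often favored for packing many copies of one application, and VMs for strong isolation between different systems.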

Why choose Red Hat?

Red Hat has supported virtualization development for a long time. We’ve been improving the Kernel-based Virtual Machine (KVM) hypervisor—part of the Linux® kernel—and contributing to KVM and oVirt since both communities were founded. Now, the KVM hypervisor is the core of all major OpenStack and Linux virtualization distributions and is the base of many Red Hat products. It’s gone on to set records for overall performance and for running the largest quantity of well-performing VMs on a single server.

Because the technology is open source, our virtualization products are designed for, tested, and certified on a broad range of servers and hardware, letting you use the infrastructure you already have. We’ve even collaborated with Microsoft to certify and support various configurations. So whether you create a handful of VMs on Red Hat® Enterprise Linux or are managing hundreds of Windows-based VMs with Red Hat Virtualization, we’ll stand by the subscription with 24/7 technical support, ongoing delivery, expertise, and certification commitments.

Think about it—you can use more of the hardware you have to run the systems you’re familiar with on 1 of the world’s most powerful virtualization infrastructures with the confidence that comes with award-winning worldwide support.

Virtualization’s benefits are generally known throughout the IT world, from reduced overhead costs to a smaller datacenter footprint, but just how well do these characteristics hold up to today’s computing environments? Based on this research, it appears the traditional benefits of virtualization still ring true.

All the pieces you need to start using virtualization

This is all you need. Really. Install it on anything—from bare-metal hardware to open source or proprietary systems—and start deploying virtual machines by the dozens or hundreds with a hypervisor that can handle it and a management platform that makes it easy.

Run your virtualization distributions on an operating system that features military-grade security, 99.999% uptime, and support for business-critical workloads. It’s the operating system our virtualization software was meant to run on.

When resources are a factor, avoid deploying storage separately from virtualization by using the same server hardware as both hypervisor and storage controller. Make the servers you have work as a clustered pool of integrated compute and storage resources—an ideal setup for remote and branch offices.

There's a lot more to do with virtualization