
Understanding virtualization

New software, from operating systems to applications, constantly demands more. More data, more processing power, more memory. Virtualization makes a single physical machine act like multiple—saving you the cost of more servers and workstations.

What is virtualization?

Virtualization is technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system. Software called a hypervisor connects directly to that hardware and allows you to split 1 system into separate, distinct, and secure environments known as virtual machines (VMs). These VMs rely on the hypervisor’s ability to separate the machine’s resources from the hardware and distribute them appropriately. In practical terms, virtualization—when used correctly—helps you get the most value from hardware investments you've already made.

The original, physical machine equipped with the hypervisor is called the host, while the many VMs that use its resources are called guests. These guests treat computing resources—like CPU, memory, and storage—as a hangar of resources that can easily be relocated. Operators can control virtual instances of CPU, memory, storage, and other resources, so guests receive the resources they need when they need them.

Ideally, all related VMs are managed through a single web-based virtualization management console, which speeds up deployment and day-to-day administration. Virtualization lets you dictate how much processing power, storage, and memory to give VMs, and environments are better protected since VMs are separated from their supporting hardware and each other.
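
To make the host/guest split concrete, below is a minimal sketch of carving a guest out of a host's resources, written against the libvirt Python bindings for a KVM hypervisor. The guest name, the vCPU and memory figures, and the disk image path are placeholder assumptions for illustration, not values from any particular product.

import libvirt  # libvirt Python bindings (libvirt-python)

# Connect to the local KVM/QEMU hypervisor: the "host."
conn = libvirt.open("qemu:///system")

# Inspect the physical resources the hypervisor can hand out to guests.
model, memory_mb, cpus, *rest = conn.getInfo()
print(f"Host: {cpus} CPUs, {memory_mb} MiB RAM ({model})")

# A minimal guest (VM) definition: 2 vCPUs and 2 GiB of memory carved out
# of the host's pool. The disk image path is a placeholder.
guest_xml = """
<domain type='kvm'>
  <name>guest01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/guest01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

dom = conn.defineXML(guest_xml)  # register the guest with the hypervisor
dom.create()                     # power it on
print("Running guests:",
      [d.name() for d in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE)])
conn.close()

A management console like the one described above essentially automates calls like these across many hosts and many guests at once.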

Simply put, virtualization creates the environments and resources you need from underused hardware.

What can you do with virtualization?

Data virtualization

Data that’s spread all over can be consolidated into a single source.

Desktop virtualization

Deploy virtual desktops that are nearly identical to locally installed environments.

Network functions virtualization

Isolated, virtual networks can be created from 1 original network.

Server virtualization

A single server can be made to act like a couple—or hundreds.

Operating system virtualization

1 computer can run multiple different operating systems.

What are the benefits of virtualization?

Virtualizing resources lets administrators ignore where hardware is physically installed, meaning hardware can be truly commoditized. So the legacy infrastructure that's expensive to maintain but supports important applications can actually be virtualized for optimal use.

Administrators no longer have to wait for every application to be certified on new hardware; just migrate the VM and everything works as before. During regression tests, a testbed can be created or copied easily, eliminating the need for dedicated testing hardware or redundant development servers.
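
As one illustration of how cheaply a testbed can be created and reset, the sketch below snapshots a running guest before a regression run and rolls back to that baseline afterward, again using the libvirt Python bindings. The domain name and snapshot name are assumptions made for this example, and internal snapshots like this assume the guest's disk is a qcow2 image.

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("guest01")  # hypothetical guest used as a testbed

# Capture the guest's current state as a named baseline before testing.
dom.snapshotCreateXML("<domainsnapshot><name>pre-test</name></domainsnapshot>", 0)

# ... run the regression suite against the guest here ...

# Discard whatever the tests changed and return to the baseline.
snap = dom.snapshotLookupByName("pre-test", 0)
dom.revertToSnapshot(snap, 0)
conn.close()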

Virtualization’s effect on efficiency and cost

In one study, Forrester Consulting interviewed a Red Hat Virtualization customer that experienced an ROI of 103% and a payback period of 5.6 months.

Virtualization security

Virtualization is an elegant solution to many common security problems. In environments where security policies require systems to be separated by a firewall, those 2 systems can safely reside on the same physical box, each isolated in its own VM. In a development environment, each developer can have their own sandbox, immune to another developer’s rogue or runaway code.


How are virtual machines managed?

Virtualization management software is designed to—well—make virtualization more manageable. Sure, you can manually allocate resources into VMs, make space for them on servers, test them, and install patches as needed. But splitting single systems into hundreds means multiplying the work needed to keep those systems running, up to date, and secure.

If all the VMs are tied to a monitoring, provisioning, or management tool, systems can be migrated automatically to better-suited hardware during periods of peak use or maintenance. Imagine a farm of servers that can be retasked in seconds—according to workload and time of day. As a particular guest instance begins consuming more resources, the monitoring system moves that guest to another server with less demand or allocates more resources to it from a central pool.
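
As a rough sketch of how such a monitoring tool might behave, the loop below samples each guest's CPU time through the libvirt Python bindings and live-migrates any guest that stays too busy to a second host. The host URIs, the busy threshold, and the sampling interval are all assumptions for illustration; a real management platform layers policy, scheduling, and safety checks on top of this idea.

import time
import libvirt

SRC_URI = "qemu:///system"                # the busy host (assumption)
DST_URI = "qemu+ssh://spare-host/system"  # a less-loaded host (assumption)
CPU_BUSY_THRESHOLD = 0.80                 # busy if vCPUs are >80% utilized
INTERVAL = 10                             # seconds between samples

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)

previous = {}  # guest name -> cumulative CPU time (ns) at the last sample

while True:
    for dom in src.listAllDomains():
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        if state != libvirt.VIR_DOMAIN_RUNNING:
            continue
        name = dom.name()
        if name in previous:
            # Fraction of the interval the guest's vCPUs spent busy.
            busy = (cpu_time - previous[name]) / (INTERVAL * vcpus * 1e9)
            if busy > CPU_BUSY_THRESHOLD:
                print(f"{name} is {busy:.0%} busy; live-migrating to the spare host")
                dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
                previous.pop(name, None)
                continue
        previous[name] = cpu_time
    time.sleep(INTERVAL)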

What's the difference between virtualization and cloud computing?

It's easy to confuse the 2, particularly because they both revolve around separating resources from hardware to create a useful environment. Virtualization helps create clouds, but that doesn't make it cloud computing. Think about it like this:

  • Virtualization is a technology that separates functions from hardware
  • Cloud computing is more of a solution that relies on that split

The National Institute of Standards and Technology cites 5 features of cloud computing: a network, pooled resources, a user interface, provisioning capabilities, and automatic resource control/allocation. While virtualization creates the network and pooled resources, additional management and operating system software is needed to create a user interface, provision VMs, and control/allocate resources.

Aren't VMs just containers?

Virtualization provisions the resources that containers can use. These VMs are environments in which containers can run, but containers aren’t tied to virtual environments.

VMs have finite capabilities because the hypervisors that create them are tied to the finite resources of a physical machine. Containers, on the other hand, share the same operating system kernel and package applications with their runtime environments so the whole thing can be moved, opened, and used across development, testing, and production configurations.

Why choose Red Hat?

Because you can use more of the hardware you have to run the systems you’re familiar with on one of the world’s most powerful virtualization infrastructures.

We've supported virtualization development for a long time—improving the Kernel-based Virtual Machine (KVM) hypervisor and contributing to KVM and oVirt since both communities were founded. The KVM hypervisor is now the core of all major OpenStack® and Linux® virtualization distributions, and it's set records for overall performance and for running the largest quantity of well-performing VMs on a single server.

All this is open source, which means it’s designed for, tested, and certified on all kinds of hardware. We’ve even collaborated with Microsoft, so you can deploy VMs on Red Hat® Enterprise Linux or even manage hundreds of Windows-based VMs using a single virtualization product.

Virtualization’s benefits are generally known throughout the IT world, from reduced overhead costs to a smaller datacenter footprint, but just how well do these characteristics hold up in today’s computing environments? Based on research like the Forrester study cited above, the traditional benefits of virtualization still ring true.

All the pieces you need to start using virtualization

This is all you need. Really. Install it on anything—from bare-metal hardware to open source or proprietary systems—and start deploying virtual machines by the dozens or hundreds with a hypervisor that can handle it and a management platform that makes it easy.

Run your virtualization distributions on an operating system that features military-grade security, 99.999% uptime, and support for business-critical workloads. It’s the operating system our virtualization software was meant to run on.

Deploy storage and virtualization together, even when resources are limited. Use the same server hardware as both hypervisor and controller, so you have a clustered pool of integrated compute and storage resources.

Virtualize data wherever it is—on-premise, in a warehouse, or in a cloud—and start treating it as a single source that can be delivered in whatever form you need, wherever you want.

There's a lot more to do with virtualization