New software, from operating systems to applications, constantly demands more. More data, more processing power, more memory. Virtualization makes a single physical machine act like multiple—saving you the cost of more servers and workstations.
Virtualization is technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system. Software called a hypervisor connects directly to that hardware and allows you to split 1 system into separate, distinct, and secure environments known as virtual machines (VMs). These VMs rely on the hypervisor’s ability to separate the machine’s resources from the hardware and distribute them appropriately. Virtualization helps you get the most value from previous investments.
The physical hardware, equipped with a hypervisor, is called the host, while the many VMs that use its resources are guests. These guests treat computing resources—like CPU, memory, and storage—as a pool of resources that can easily be relocated. Operators can control virtual instances of CPU, memory, storage, and other resources, so guests receive the resources they need when they need them.
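That pooling is concrete in how a guest is defined. As a hedged sketch, a KVM/libvirt host describes each guest's slice of the shared CPU, memory, and storage in a domain definition like the following (the guest name and disk path are hypothetical):

```xml
<domain type="kvm">
  <name>guest01</name>              <!-- hypothetical guest name -->
  <memory unit="GiB">4</memory>     <!-- slice of the host's memory pool -->
  <vcpu>2</vcpu>                    <!-- virtual CPUs carved from physical cores -->
  <devices>
    <disk type="file" device="disk">
      <source file="/var/lib/libvirt/images/guest01.qcow2"/>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
```

The hypervisor reads a definition like this and maps those virtual resources onto whatever physical hardware is available, which is what lets operators relocate or resize guests without touching the guest itself.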
Migrate your virtual infrastructure to Red Hat solutions
Your virtual infrastructure shouldn't limit what apps and services you use—it should enable them. Migrating to Red Hat solutions can reduce your infrastructure spending and give you more opportunities to invest in clouds, containers, and automation.

What are the benefits of virtualization?
Virtualizing resources lets administrators pool their physical resources, so their hardware can truly be commoditized. Even legacy infrastructure that's expensive to maintain, but supports important apps, can be virtualized for optimal use.
Administrators no longer have to wait for every app to be certified on new hardware; just set up the environment, migrate the VM, and everything works as before. During regression tests, a testbed can be created or copied easily, eliminating the need for dedicated testing hardware or redundant development servers. With the right training and knowledge, these environments can be further optimized to gain greater capabilities and density.
How secure is virtualization?
You know that security should be continuous and integrated. Virtualization is an elegant solution to many common security problems. In environments where security policies require 2 systems to be separated by a firewall, those systems could safely reside on the same physical box. In a development environment, each developer can have their own sandbox, immune from another developer’s rogue or runaway code.
How are virtual machines managed?
Virtualization management software is designed to—well—make virtualization more manageable. Sure, you can manually allocate resources into VMs, make space for them on servers, test them, and install patches as needed. But splitting single systems into hundreds means multiplying the work needed to keep those systems running, up to date, and secure.
If all the VMs are tied to a monitoring, provisioning, or management tool, systems can be migrated automatically to better-suited hardware during periods of peak use or maintenance. Imagine a farm of servers that can be retasked in seconds—according to workload and time of day. As a particular guest instance begins consuming more resources, the monitoring system moves that guest to another server with less demand or allocates more resources to it from a central pool.
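Under the hood, that rebalancing comes down to 2 operations a management tool can script: live migration and resizing. As a hedged sketch with KVM/libvirt tooling (the guest name "web01" and the host "host2" are hypothetical):

```shell
# Live-migrate the running guest "web01" to another host over SSH,
# copying its memory state while the VM keeps serving traffic.
virsh migrate --live web01 qemu+ssh://host2/system

# Or, instead of moving it, grant the guest more memory from the
# host's pool (up to the maximum configured for the domain).
virsh setmem web01 8G --live
```

A monitoring system simply issues commands like these automatically when a threshold is crossed, which is what turns a rack of servers into a pool that retasks itself.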
It's easy to confuse virtualization and cloud computing, particularly because they both revolve around separating resources from hardware to create a useful environment. Virtualization helps create clouds, but that doesn't make it cloud computing. Think about it like this:
- Virtualization is a technology that separates functions from hardware
- Cloud computing is more of a solution that relies on that split
The National Institute of Standards and Technology cites 5 features of cloud computing: a network, pooled resources, a user interface, provisioning capabilities, and automatic resource control/allocation. While virtualization creates the network and pooled resources, additional management and operating system software is needed to create a user interface, provision VMs, and control/allocate resources.
That's because it’s not just about virtualization. It’s about what virtualization can (or can’t) do to support the technologies that depend on it.
Proprietary virtualization limits access to its source code, which is the key to making your IT infrastructure do what you want it to. These vendors regularly bind users to enterprise license agreements (ELAs) that increase your reliance on that vendor’s software. This can reduce your ability to invest in modern technologies like clouds, containers, and automation systems.
On the other hand, open source virtualization gives users complete control over the infrastructure it creates and everything that relies on it. That means you can modify it to work with (or without) any vendor. And there’s no need for an ELA because there’s no source code to protect. It’s yours.
Virtualization provisions the resources that containers can use. These VMs are environments in which containers can run, but containers aren’t tied to virtual environments. Some software—like Red Hat® OpenShift® Virtualization, featured in this Red Hat Summit 2020 track as a breakout session—can both orchestrate containers and manage virtual machines, but that doesn't mean the 2 technologies are the same.
VMs have finite capabilities because the hypervisors that create them are tied to the finite resources of a physical machine. Containers, on the other hand, share the same operating system kernel and package applications with their runtime environments so the whole thing can be moved, opened, and used across development, testing, and production configurations.
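That difference is easiest to see at the command line. As a hedged illustration (the guest name is hypothetical and would already need to be defined on the host):

```shell
# A container shares the host kernel: the app starts almost instantly,
# packaged with only its runtime environment.
podman run --rm registry.access.redhat.com/ubi9/ubi echo "hello from a container"

# A VM boots its own kernel on virtual hardware carved out by the
# hypervisor—a complete, isolated operating system.
virsh start demo-vm
```

The container command runs a process against the shared kernel; the VM command powers on an entire simulated machine. That's why containers move so easily across development, testing, and production, while VMs stay bound to the resources a hypervisor provisions for them.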
Why choose Red Hat?
Because you can use more of the hardware you have to run the systems you’re familiar with on one of the world’s most powerful virtualization infrastructures.
We've supported virtualization development for a long time—improving the Kernel-based Virtual Machine (KVM) hypervisor and contributing to KVM and oVirt since both communities were founded. Red Hat also uses Red Hat products internally to achieve faster innovation, and a more agile and responsive operating environment.
The KVM hypervisor is now the core of all major OpenStack® and Linux® virtualization distributions, and it's set records for overall performance and for running the largest number of well-performing VMs on a single server.
All this is open source, which means it’s designed for, tested, and certified on all kinds of hardware. We’ve even collaborated with Microsoft, so you can deploy VMs on Red Hat® Enterprise Linux or even manage hundreds of Windows-based VMs using a single virtualization product.
Already have a virtual infrastructure?
If that infrastructure depends on enterprise license agreements (ELAs) and source code you can’t reach, then it isn’t one built to handle an era of disruption. ELAs can limit how much funding is available to invest in cloud, container, and automation technologies, while proprietary code can reduce what could have been innovative developments to mere workarounds.
Keep exploring virtualization
All the ways you can start using virtualization
This is all you need. Really. Install it on anything—from bare-metal hardware to open source or proprietary systems—and start deploying virtual machines by the dozens or hundreds with a hypervisor that can handle it and a management platform that makes it easy.
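Deploying "by the dozens" usually means scripting the hypervisor's tooling. As a hedged sketch using KVM's virt-install (the guest names, base image path, and sizes are all hypothetical):

```shell
# Create a thin clone of a base image and register a new KVM guest
# for each name in the list.
for vm in web01 web02 db01; do
  qemu-img create -f qcow2 -b /var/lib/libvirt/images/base.qcow2 \
    -F qcow2 "/var/lib/libvirt/images/${vm}.qcow2"
  virt-install --name "$vm" --memory 4096 --vcpus 2 \
    --disk "/var/lib/libvirt/images/${vm}.qcow2" \
    --import --os-variant rhel9.0 --noautoconsole
done
```

A management platform wraps the same kind of loop in policy and a user interface, which is what makes hundreds of guests practical rather than just possible.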
Deploy storage and virtualization together, even when resources are limited. Use the same server hardware as both hypervisor and controller, so you have a clustered pool of integrated compute and storage resources.