For teams running virtualization estates of any size today, three pressures are converging at once. Hardware budgets that looked generous in late 2024 have been blindsided by memory costs, turning routine refreshes into financial hurdles. Licensing models have moved in directions most renewal cycles weren't built to absorb. And in many organizations, leadership has moved from asking "how do we plan the next infrastructure refresh?" to "how do we get more out of the infrastructure already in place?" These pressures are not unique to virtualization, but virtualization is where they all show up at once.
Let's take the hardware side first. AI demand has pulled the world's dynamic random-access memory (DRAM) capacity toward the high bandwidth memory used on AI accelerators, the hyperscalers have locked in long-term supply agreements for most of what's left, and TrendForce has revised its Q1 2026 server DRAM contract forecast to +90% quarter-on-quarter. With DRAM and NAND flash now making up more than half the bill-of-materials cost of a traditional server, OEMs raised commercial server list prices by 15-20% in late 2025 and early 2026. IDC has characterized the situation as a structural reset rather than a cyclical shortage, with elevated prices expected to persist well into 2027.
Virtualization workloads are memory-dense by definition (every additional virtual machine (VM) brings its own operating system and its own headroom), and the answer for most virtualization teams over the years has been to scale out: more hosts, more memory, more clusters. With memory pricing now resetting total cost of ownership (TCO) models mid-project, that reflex no longer works the way it used to. The main consideration in 2026 is how much can be recovered from the hardware already in use.
Getting more from the VM estate you already own
Red Hat OpenShift Virtualization is a good fit for this scenario, for reasons that predate the current squeeze. It treats VMs as first-class workloads on a platform whose scheduler, kernel-level controls, and autoscalers were designed around a core assumption: the platform's job is to pack workloads onto shared infrastructure efficiently. That assumption fits today's constrained environment unusually well. It's also part of the same platform, Red Hat OpenShift, that a team uses for containerized applications, serverless functions, AI workloads, and anything else they're building, so the move off a traditional hypervisor doesn't have to be an isolated event with a fixed endpoint. It can be the on-ramp to a single platform that handles whatever comes next.
Some alternatives layer Kubernetes on top of traditional virtualization, but OpenShift runs VMs and containerized workloads as native peers on the same bare metal, sharing the same scheduler, the same nodes, and the same management plane. That distinction matters when the goal is density, because it removes the parallel-infrastructure cost of running two stacks side by side, and it removes the friction of moving an application between them when an application team is ready.
The skills question comes up in almost every conversation with teams considering a move, and there is a real anxiety underneath it: virtualization administrators have spent years building deep expertise in tools and workflows they trust, and they want to know whether that expertise still has a place in what comes next. The short answer is yes. OpenShift Virtualization includes a dedicated Virtualization perspective in the OpenShift web console, alongside Red Hat OpenShift Lightspeed, a generative AI virtual assistant that answers OpenShift questions and provides step-by-step guidance in natural language. And concepts like hosts and guests, resource pools, overcommit, live migration, CPU pinning, and non-uniform memory access (NUMA) placement all have a corresponding capability in OpenShift Virtualization. The mental model the admin uses every day remains intact, and Kubernetes concepts are additive to existing skills, not a replacement for them.
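To make that mapping concrete, here is a minimal sketch of how familiar hypervisor concepts appear in a VM definition. It uses the upstream KubeVirt API that OpenShift Virtualization is built on; the VM name, sizes, and disk image are illustrative, not a recommendation:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: db-vm-01                      # hypothetical name
spec:
  runStrategy: Always
  template:
    spec:
      evictionStrategy: LiveMigrate   # live migration when a node is drained
      domain:
        cpu:
          cores: 4
          dedicatedCpuPlacement: true # CPU pinning to dedicated host cores
        memory:
          guest: 8Gi                  # memory the guest OS sees
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
```

The familiar levers (pinning, live migration, guest sizing) are all there; they are simply expressed declaratively, which is what makes them available to GitOps tooling later.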
Efficiency that's built in
Let's break the efficiency story down into three layers. Each one does a different job, so administrators can match the right mechanism to each workload rather than turning on a single cluster-wide toggle. They're worth taking deliberately, in order:
- Right-size the VMs first. The platform needs an accurate picture of the resources each VM actually requires. Most VM fleets are over-provisioned because administrators tend to set resource reservations generously, as a safety margin. Used with Red Hat Advanced Cluster Management observability, OpenShift Virtualization can surface right-sizing recommendations for VMs based on observed CPU and memory usage over time. Think of this as the VM-oriented equivalent of right-sizing guidance for container workloads: it helps teams identify over-provisioned or under-utilized VM resources and adjust them deliberately. This is low-risk recovered capacity. Once those recommendations are reviewed and applied, the environment stops carrying resources that workloads do not actually need. For teams facing immediate DRAM pressure, right-sizing is the first lever to reach for.
- Reclaim memory that VMs have been allocated but aren't actively using. Even after a VM has been right-sized, its memory usage will still rise and fall over time. OpenShift Virtualization uses free page reporting, through the virtio-balloon device, to help the guest report unused memory pages back to the host. That gives the platform more flexibility to use available memory efficiently across the environment, without changing the way the application runs. Actual results depend on guest support, driver availability, and workload behavior, but for memory-dense VM estates it is an important second layer of efficiency after right-sizing.
- Apply overcommit where the workload mix supports it. Many VMs do not consume their full CPU or memory allocation all the time, which creates an opportunity to improve density where the workload profile allows it. In Red Hat performance testing with database workloads, OpenShift Virtualization sustained a 25% memory oversubscription scenario, with some tested workloads showing no significant performance drop even when memory pressure increased. The results also showed why overcommit should be applied deliberately: impact varied depending on how each workload used memory. OpenShift Virtualization gives administrators control over when and how to apply memory overcommit, so they can improve utilization without treating every workload the same. For suitable workloads, it can help teams get more from the infrastructure they already have.
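The second and third layers can be sketched in a single VM definition. This is a hedged illustration using the upstream KubeVirt API that OpenShift Virtualization is built on: the virtio-balloon device (which free page reporting rides on) is attached by default and shown explicitly here, and per-VM memory overcommit is expressed by asking the scheduler to reserve less memory than the guest sees. The name and sizes are invented for illustration:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: app-vm-01                     # hypothetical name
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        devices:
          autoattachMemBalloon: true  # virtio-balloon device; attached by default
        memory:
          guest: 10Gi                 # memory the guest OS sees
        resources:
          requests:
            memory: 8Gi               # memory the scheduler reserves: ~25% overcommit
```

The gap between `memory.guest` and `resources.requests.memory` is the overcommit, and because it is set per VM, administrators can apply it only to workloads whose profile supports it, in line with the testing results above.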
Migrate once, modernize on your own terms
For most organizations facing this decision, what's pushing them off a traditional hypervisor is external pressure (renewal cycles, licensing changes, hardware constraints), not strategic appetite. And that distinction is an important one, because the investment made today should compound, not need redoing in a few years' time. Migration onto OpenShift Virtualization can deliver platform modernization from the moment workloads land, because VMs move onto a modern, hybrid-ready foundation without requiring application changes first.
Modern operational practices (GitOps for configuration, continuous integration and continuous delivery (CI/CD) for changes, and unified observability for what's running, supported by Red Hat Ansible Automation Platform for day-2 operations) become available to VM workloads on day one (operational modernization). Containerizing applications at your own pace, when business priorities call for it, can happen later (application modernization). Neither is forced by the platform. Each becomes available as a choice rather than a project.
A practical sequence for teams under hardware pressure
The steps below assume you already have OpenShift Virtualization in place. For readers not yet on the platform, Red Hat has mature audit and migration tooling to support the move:
- OpenShift Migration Advisor: A no-cost VMware migration-readiness reporting tool
- Virtualization Migration Assessment: A Red Hat Consulting engagement for larger estates, or for any organization without the in-house knowledge or resourcing to perform the migration planning alone
- Migration Factory: For at-scale VM migration execution
- Migration toolkit for virtualization: The toolkit that automates the migration of VMs at scale while preserving data integrity during the transition
Let's pull all of this together into a sequence that a team can act on.
- Start with an honest picture of the existing estate. How over-provisioned are the VMs already running? How much of what each VM has reserved is it actually using? Right-sizing alone is enough to recover material capacity in most environments. The good news is that OpenShift Virtualization runs on a broad range of industry-standard x86 and Arm hardware, drawing on the Red Hat Enterprise Linux certification ecosystem, so the servers already in the datacenter are very likely candidates without an additional procurement cycle.
- Place each workload where it belongs. Some VMs are best left running close to the data they serve, on OpenShift Virtualization on-premises. Others may be better suited to the public cloud, especially when organizations need a faster path to infrastructure without waiting on new datacenter capacity. OpenShift Virtualization is supported across selected public cloud environments and can run on supported OpenShift deployments, including managed OpenShift services available across leading public cloud providers. That helps teams use a consistent operational model across on-premises, edge, and public cloud deployments, giving customers more freedom to place workloads based on business, technical, and operational needs, rather than being constrained by where the platform can run. And with capabilities such as memory overcommit, they can also improve infrastructure utilization and efficiency in environments where every resource has a direct cost.
- Apply the efficiency mechanisms, in order, wherever the workloads land. As we walked through earlier in this piece, OpenShift Virtualization gives teams a layered set of efficiency capabilities, including free page reporting for memory reclamation and configurable overcommit for workload mixes that support it. Used with Red Hat Advanced Cluster Management for Virtualization observability, it can also surface fleet-level right-sizing recommendations based on observed VM utilization. Apply these mechanisms in sequence: Right-size first, reclaim next, and overcommit where the workload mix supports it. Together, they give you a practical path to recover capacity from the infrastructure you already have, based on the subscriptions and deployment model you're using.
- Modernize the applications that are ready, and only those. Containerizing applications that are ready for it bends the density curve further, if only by eliminating the per-VM operating-system overhead. For applications that aren't ready, leave them as VMs on OpenShift Virtualization and pick them up when the application team has the capacity to engage. The platform doesn't force a choice between the two.
For most teams, that sequence buys back enough capacity to keep moving without an emergency procurement cycle.
Doing more with what you already have
For most teams, the procurement reality of the last 18 months has changed how they think about capacity. Server hardware has never been cheap, but for years it was at least predictable: procurement cycles followed a known shape, lead times were measurable in weeks rather than months, and the DRAM line on a server quote stayed broadly stable from one refresh to the next. That picture has shifted. The buying patterns that made sense when supply was steady (the parallel VM stack maintained alongside the container stack, the generous headroom kept as a hedge, the under-utilized clusters carried as a reliability buffer) are harder to defend when each unit of additional capacity is taking longer to arrive and costing more once it does.
Now, the good news is that the response to this isn't more hardware. There is real capacity sitting in the servers you already own, and Red Hat OpenShift Virtualization is built to help recover it: the scheduler, the memory-efficiency mechanisms, the placement controls, and the overcommit capabilities are all part of the platform. What's missing in most cases isn't the technology. It's the intent to use it.
If you'd value an experienced eye on it, the Red Hat Consulting team has worked alongside many leading global organizations on exactly this kind of optimization. Speak to your Red Hat account team to find out more.
And if you're ready to go deeper on the specific mechanisms (scheduler profiles, overcommit configuration, autoscaling behavior, and the operational tradeoffs that come with each), my colleague Andrew Sullivan has written a more technically detailed companion to this post that walks through the how rather than the why: The Hardware Shortage Is Not Your Capacity Problem.
And finally, there's a companion checklist that summarizes the practical sequence above on a single page, useful for teams who want a quick reference to take into the next infrastructure-planning conversation.
Product trial
Red Hat OpenShift Virtualization Engine | Product Trial
About the author
Simon is a passionate technologist, with over 25 years of experience working in the enterprise IT and cloud technologies space. Simon’s career trajectory has seen him working with a multitude of transformative technologies within the cloud and enterprise computing space, allowing him to stay at the forefront of industry trends.
Beyond his professional achievements, Simon is an advocate for technology's role in driving business innovation and efficiency. Simon's contribution to the field of enterprise IT and cloud technologies is not just through his work at Red Hat but also through his active participation in various IT community forums, publications, and events.