Red Hat Enterprise Virtualization made solid progress during 2010. We delivered Red Hat Enterprise Virtualization 2.2 and the first release of Red Hat Enterprise Virtualization for Desktops. We announced that several enterprise clouds, such as IBM’s, would be built on our virtualization platform. And we announced a string of customer wins. Along with these advances came widespread acknowledgment from the press and analyst communities that Red Hat’s virtualization portfolio had become a potent force in the market. Now, keeping up the momentum, we’re kicking off 2011 with a pair of leading virtualization performance results.

In this post, we’ll discuss progress in virtualization benchmarking. Whether we’re reading Consumer Reports or cruising Internet forums, we all consult external tests and reviews when choosing the products we wish to purchase, and virtualization is no exception. In fact, for many customers, performance benchmark results are a required checklist item in the product assessment and purchase process.

In the case of virtualization, independent benchmark results have been scarce, and availability is only slowly improving. In the days when VMware was the only mainstream virtualization vendor, its performance was, not unreasonably, measured by its own VMmark benchmark. Today, however, with multiple vendors and products to choose from, a fully open, independent benchmark is needed. That need is starting to be met by the recently introduced SPECvirt benchmark, created by the Standard Performance Evaluation Corporation (SPEC) in collaboration with leading virtualization vendors, including Red Hat, Microsoft and VMware. It is currently the only industry-standard benchmark that measures virtualization performance and scalability.

The SPECvirt benchmark measures the ability of a system to host virtual machines that are running a set of typical server applications (web, application, mail, etc.) – it’s modeled to look like a customer’s real environment. As described on the SPEC website, the benchmark “measures the end-to-end performance of all system components including the hardware, virtualization platform and the virtualized guest operating system and application software. The benchmark supports hardware virtualization, operating system virtualization and hardware partitioning schemes.”

The metric for SPECvirt is SPECvirt_sc2010, a composite number derived from the performance of the virtualized applications combined with a Quality of Service (QoS) requirement. Every published result also shows how many virtual machines were used to achieve the SPECvirt_sc2010 figure. To simplify benchmark setup, six virtual machines, each running a different application, are grouped into a tile. A benchmark run increases the number of tiles deployed until the QoS requirement can no longer be met or until total performance no longer increases.
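
To make the tile mechanics concrete, here is a minimal Python sketch of the scaling procedure described above. This is our illustration, not SPEC’s code: `run_tiles` is a hypothetical stand-in for the real benchmark harness, which deploys the tiles, drives the workloads and checks QoS.

```python
# Minimal sketch of the SPECvirt scaling loop (illustrative only).
# run_tiles(n) is a hypothetical stand-in for the real harness: it
# deploys n tiles (six VMs each), drives the workloads, and returns
# the aggregate score plus whether every VM met its QoS requirement.

TILE_SIZE = 6  # VMs per tile, one per application type

def find_peak(run_tiles):
    """Add tiles until QoS fails or the total score stops increasing."""
    best_score, best_vms = 0.0, 0
    tiles = 1
    while True:
        score, qos_met = run_tiles(tiles)
        if not qos_met or score <= best_score:
            break  # QoS violated, or extra tiles no longer help
        best_score, best_vms = score, tiles * TILE_SIZE
        tiles += 1
    return best_score, best_vms  # reported as score@VMs
```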

This relatively sophisticated setup provides a flexible test environment, but because different submissions may run different applications (e.g. different web servers or different application servers), reviewers must take care to ensure that product-to-product comparisons are valid. Also, as with any benchmark, it is important to compare the hardware platforms used – a bigger system should deliver a more impressive result.

At this time, only five SPECvirt results have been published: four for Red Hat virtualization, based on the KVM hypervisor, and one for VMware, based on ESX 4.1. See here for the full listing of results.

The two most recent world-record results, published by IBM in November 2010, used Red Hat Enterprise Linux 6 virtualization running on IBM xSeries servers:

  • IBM x3850: 64 cores (128 threads), 2TB memory
    Result: 5466.58@336 (SPECvirt_sc2010@VMs)
  • IBM x3690: 16 cores (32 threads), 1TB memory
    Result: 1763.68@108 (SPECvirt_sc2010@VMs)
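
As a quick worked example (our arithmetic, not part of the published results), the score@VMs figures above can be broken down using the six-VMs-per-tile rule, with per-core numbers as a rough aid to the hardware comparison discussed earlier:

```python
# Breaking down the two published results: (score, VMs, cores).
results = {
    "IBM x3850": (5466.58, 336, 64),
    "IBM x3690": (1763.68, 108, 16),
}

for name, (score, vms, cores) in results.items():
    tiles = vms // 6  # six VMs per tile
    print(f"{name}: {tiles} tiles, "
          f"{score / tiles:.1f} per tile, {score / cores:.1f} per core")
```

Interestingly, per-tile throughput comes out nearly identical on the two systems (roughly 98 in both cases), which is what a QoS-bounded benchmark should produce as it scales.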

These results were delivered on the largest systems yet published for this benchmark – systems that no virtualization vendor other than Red Hat can run on, thanks to our KVM technology, which leverages the power of the Linux kernel. They show how rapidly the performance envelope for virtualization is growing: the first result, published in June 2010 on Red Hat Enterprise Linux 5.5 and a much smaller IBM x3650, achieved less than a quarter of the performance of the latest result. In fact, the benchmark-leading system – the large IBM x3850 – is not yet supported by all the virtualization products on the market today because of its large memory and CPU configuration.

During 2011 we hope to see a greatly increased set of SPECvirt results published, covering systems of varying sizes and a range of virtual server applications. No doubt the vendors will play the classic game of benchmark leapfrog, driving faster development of virtualization technology and improved solutions for customers. Two further SPECvirt metrics have also been defined, measuring performance per watt of power consumed by the server and the storage. Although power consumption is a vital consideration for IT departments today, no results have yet been published against these metrics. There is still much work to do to provide customers with comprehensive benchmark results that help them choose the most appropriate virtualization solution for their environment, but SPECvirt is well on its way to filling this need.