Red Hat is best known for Red Hat Enterprise Linux (RHEL) and for its leadership in driving open source development projects. In many cases, those upstream projects then become Red Hat products that provide enterprise functionality elsewhere in the stack.

In a previous blog post, I detailed how we use Red Hat Single Sign On (SSO) to provide a robust and scalable authentication system for public web properties. Applications such as Red Hat SSO can obviously be deployed on a variety of platforms. Red Hat IT chose a hybrid-cloud deployment model for Red Hat SSO, with the majority of normal traffic for https://sso.redhat.com serviced out of one of our corporate data centers. There, SSO and virtually every other application run on top of Red Hat Virtualization.

Our story with production deployments on top of Red Hat Virtualization dates back to early 2010, when Red Hat IT fully embraced Red Hat Virtualization while it was still in pre-release alpha builds. Red Hat Virtualization has obviously evolved a lot since then, but at its heart it is still built on the rock-solid KVM virtualization technology.

Like most enterprises, Red Hat operates several co-located data centers. These deployments range from 5-10 racks to tens of thousands of square feet. In each site, we build at least one Red Hat Virtualization cluster (a Data Center in Red Hat Virtualization parlance), consisting of RHEL hypervisors and a storage array. More information on our deployment may be found on the customer portal. Of course, version numbers have changed; we are now running a combination of Red Hat Virtualization 3.6 and Red Hat Virtualization 4.0.

In our main production sites, we have two distinct Red Hat Virtualization data centers and management nodes. These utilize disparate compute fabrics and storage arrays. Each application has highly available components split across the two, with a load balancer managing traffic in either an active/active or an active/passive configuration, depending upon the application. In the case of Red Hat SSO, the MariaDB database primary lives on one side and the hot standby lives on the other; we are working on converting this to an active/active configuration using Galera. The Red Hat SSO application nodes themselves are deployed active/active, with two nodes in each Red Hat Virtualization data center. Traffic is load balanced across all the RH SSO nodes, which maintain session state using Infinispan clustering.
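To make the database side of that picture concrete, here is a minimal sketch in Python (using the PyMySQL library) of the kind of check that confirms the hot standby is replicating from the primary across data centers. The hostname and credentials are hypothetical placeholders, and this only illustrates the primary/hot-standby pattern; it is not our actual monitoring tooling.

```python
# Minimal sketch: verify that a hot-standby MariaDB node is replicating
# from the primary in the other Red Hat Virtualization data center.
# Hostname and credentials are hypothetical placeholders.
import pymysql

STANDBY_HOST = "sso-db-standby.example.com"  # assumed name, not a real host


def replication_healthy(host: str, user: str, password: str) -> bool:
    """Return True if the standby's replication threads are running
    and it is not lagging too far behind the primary."""
    conn = pymysql.connect(host=host, user=user, password=password,
                           cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW SLAVE STATUS")
            status = cur.fetchone()
            if not status:
                return False  # not configured as a replica at all
            io_ok = status["Slave_IO_Running"] == "Yes"
            sql_ok = status["Slave_SQL_Running"] == "Yes"
            lag = status["Seconds_Behind_Master"]
            return io_ok and sql_ok and lag is not None and lag < 30
    finally:
        conn.close()


if __name__ == "__main__":
    print(replication_healthy(STANDBY_HOST, "monitor", "secret"))
```

Once the Galera conversion is complete, a check like this would instead look at cluster membership and flow control rather than classic replication lag.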

Traffic coming into https://sso.redhat.com passes through the CDN and is ultimately serviced either by the application nodes hosted on Red Hat Virtualization or by the RH SSO clusters we have in AWS. For the most part, traffic is served out of our data centers, as those nodes are much more performant. We have found that Red Hat Virtualization VMs typically perform at a rate commensurate with bare-metal systems.

Benefits of Virtualization for Single Sign On

The benefits of Red Hat Virtualization for RH SSO far exceed just VM performance. We are able to rapidly provision and deprovision VMs thanks to the Red Hat Virtualization Command Line Interface (CLI) and API. In fact, in a previous role, I was able to build out an entire new colocation facility in half a day thanks to “a for loop”, the Red Hat Virtualization CLI, and having our applications completely puppetized.
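To give a flavor of what that looks like, here is a minimal sketch using the Python SDK for the Red Hat Virtualization (oVirt) API; the engine URL, credentials, cluster, and template names are placeholders rather than anything from our environment.

```python
# Minimal sketch: bulk VM provisioning through the Red Hat Virtualization
# (oVirt) REST API using the Python SDK (ovirt-engine-sdk-python v4).
# Engine URL, credentials, cluster, and template names are hypothetical.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url="https://rhvm.example.com/ovirt-engine/api",
    username="admin@internal",
    password="changeme",
    ca_file="ca.pem",
)

vms_service = connection.system_service().vms_service()

# The "for loop" that stamps out a set of identical application VMs from a
# template; Puppet then configures each node once it boots.
for i in range(1, 5):
    vms_service.add(
        types.Vm(
            name=f"ssoapp{i:02d}",
            cluster=types.Cluster(name="prod-cluster-a"),
            template=types.Template(name="rhel-sso-template"),
        )
    )

connection.close()
```

The same loop, pointed at a different cluster or template, is essentially all it takes to stand up the application tier in a new site, which is why a fully templated and puppetized environment can be rebuilt so quickly.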

The stability and performance of the KVM technology, combined with the management functionality and security of Red Hat Virtualization, allow IT departments such as Red Hat IT to realize the promise of rapid infrastructure delivery while consuming existing hardware assets. Moreover, https://sso.redhat.com leverages a complete and open Red Hat stack, demonstrating Red Hat’s commitment to open source enterprise technology.

To experience how your organization can quickly scale to meet your unique business demands, I invite you to download a fully supported 60-day trial of Red Hat Virtualization here.