There have been countless advances in technology in the last few years, both in general and at Red Hat; to list just the ones specific to Red Hat could boggle the mind. Arguably, some of the biggest advances have come in the form of “soft” skills. Namely, Red Hat has become very good at listening, not only to our own customers but to our competitors’ customers as well. Nowhere is this more apparent than in our approach to applying a self-service catalog to virtualization: specifically, pairing Red Hat Enterprise Virtualization (RHEV) with CloudForms to streamline and automate virtual machine provisioning.
Over the last several weeks, I’ve run a four-part series on my own (personal) blog, “Captain KVM”, that walks through the actual steps of setting up a self-service catalog using RHEV and CloudForms. In this article, I’d like to focus on why this technology tandem is such an important option in the modern data center.
With so many different platforms to choose from, it may seem odd to still focus on so-called “traditional virtualization”. However, judging from the number of RHEV customers, existing and new, it’s not odd at all. In fact, it’s a very effective solution. In other words, don’t confuse “traditional” with “outdated” or “legacy”.
New Technologies and Solid Foundations
Linux containers are a fantastic means of delivering applications. OpenStack is hard to beat when on-demand, scale-out applications are required. However, traditional virtualization is still the bedrock of the modern data center: most containers run on virtual machines, and virtual machines run alongside OpenStack instances. The applications that run on virtual machines aren’t going anywhere anytime soon, and therefore neither are the virtual machines themselves.
The Big Problem
The problem is that the provisioning of virtual machines, and by proxy their payload applications, is still largely manual and disruptive to overall productivity for far too many customers.
Because of this, most customers I’ve talked to over the last two years have expressed immense frustration with their inability to fully automate the provisioning process when deploying an application, including the underlying virtual machine. Here are some of the most common complaints I’ve heard from our customers, and our competitors’ customers, regarding provisioning virtual machines:
- Inconsistent processes and procedures for different applications and end users
- Inconsistent means of applying security/compliance and configuration management
- Home grown tools are difficult to maintain, especially if creator(s) leave
- The overall process is partially automated and partially manual, producing inconsistent results, and application deployment is a separate process
And even for customers that have some form of self-service portal, I’ve heard no shortage of complaints from them:
- Difficult to create or customize workflows for the self-service portal
- Inconsistent workflows for different hypervisors, if they are even supported
- Difficult to integrate and/or write to API’s
- Large sums of budget wasted on incomplete integration, broken/inflexible workflows, or consulting services that never seem to finish
Inconsistent workflows and processes breed frustration and inefficiency at best; troubleshooting and patching become nightmares. At worst, they allow for security lapses.
A Simpler Solution
I’m advocating not only streamlining the provisioning process, but streamlining the deployment of the underlying infrastructure as well. Let’s focus on the provisioning problem first.
Imagine this workflow instead:
- User logs into a self-service portal.
- Based on the user’s credentials and access, he or she is offered a selection of VMs (single items) or VM bundles (entire environments) from which to choose.
- User selects a VM and orders it, much like online shopping.
- Behind the scenes, auto-approvals have already determined that the VM selection is within scope and sent the order to the underlying virtualization platform. At the proper steps, IP management and configuration management are called upon, applications are deployed, schemas are applied, and compliance policies are adjudicated.
- After a short period of time, the VM is running in production. Fully automated.
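The same order that a user places through the portal can also be driven through an API. As a rough sketch, here is what building such an order request might look like against a CloudForms-style REST API. Note that the hostname, catalog item ID, and dialog field names below are illustrative assumptions, not values from a real deployment; consult your CloudForms version’s API documentation for the actual endpoints and fields:

```python
import json

def build_order_request(service_template_id, requester, options):
    """Build the JSON body for ordering a catalog item.

    The "order" action and the dialog_* fields are illustrative;
    real deployments define their own service dialog fields.
    """
    return {
        "action": "order",
        "resource": {
            # Hypothetical appliance URL and catalog item path
            "href": f"https://cloudforms.example.com/api/service_templates/{service_template_id}",
            "requester": requester,
            # Dialog options collected from the self-service form
            **{f"dialog_{key}": value for key, value in options.items()},
        },
    }

payload = build_order_request(
    service_template_id=42,  # hypothetical catalog item ID
    requester="jdoe",
    options={"vm_name": "web01", "memory_mb": 4096},
)
print(json.dumps(payload, indent=2))

# Submitting the order would then be a single authenticated POST, e.g.:
#   requests.post("https://cloudforms.example.com/api/service_templates/42",
#                 json=payload, auth=("user", "password"))
```

The point of the sketch is that the entire “shopping cart” interaction reduces to one structured request, which is exactly what makes it scriptable and repeatable.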
The administrative work of setting up the workflows, quotas, chargeback, security/compliance policies, and other items is done up front, but then that particular workflow is complete. Aside from updating policies or templates periodically, the administrators and engineers are effectively hands-off from the provisioning process and can move on to other work in the data center.
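To make the auto-approval step concrete, here is a minimal sketch of the kind of quota check a workflow engine performs before an order proceeds. The limits, field names, and escalation behavior are invented for illustration; they are not CloudForms internals:

```python
from dataclasses import dataclass

@dataclass
class Quota:
    """Per-user or per-group limits set up front by the administrator."""
    max_vms: int
    max_memory_mb: int

def auto_approve(order, current_usage, quota):
    """Return (approved, reason) for an incoming order.

    'order' and 'current_usage' are plain dicts here; a real engine
    would pull usage from the virtualization platform's inventory.
    """
    if current_usage["vms"] + order["vms"] > quota.max_vms:
        return False, "VM count quota exceeded; escalating to manual approval"
    if current_usage["memory_mb"] + order["memory_mb"] > quota.max_memory_mb:
        return False, "memory quota exceeded; escalating to manual approval"
    return True, "within scope; order sent to virtualization platform"

quota = Quota(max_vms=10, max_memory_mb=65536)
usage = {"vms": 7, "memory_mb": 40960}

approved, reason = auto_approve({"vms": 1, "memory_mb": 4096}, usage, quota)
print(approved, reason)  # True within scope; order sent to virtualization platform
```

Because the policy lives in one place rather than in each administrator’s head, every order is judged the same way, which is exactly the consistency the complaints above are about.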
The level of productivity for the engineers and administrators goes up. So does the level of consistency in VM and application provisioning, and security and compliance are applied more consistently.
The effect is no less profound for end users. In as few as six steps, their requests are in process: log in, click on self-service, click on a selection, enter some basic data, add to the shopping cart, and order. The level of frustration goes down, the consistency of experience from application to application goes up, and their productivity increases as well, as it’s a simple and convenient process.
Streamlining the deployment of the underlying infrastructure is also achievable. As mentioned at the beginning of this article, the entire process is described in my technical blog. Realistically, Red Hat Enterprise Virtualization (RHEV) can be set up in a day or so, and CloudForms can be deployed on RHEV in less than a day. The time it takes to configure the workflows depends on their complexity, but basic ones can be done in a matter of minutes.
The point is that a fully functioning RHEV and CloudForms environment can be up and running in a week or less. Custom and third-party integration takes longer. Do we have consulting services to assist? Yes, we do. Will they stay on-site forever? No; that’s not our business model. We’d rather show your admins and engineers how to do it correctly and then move on.
Hope this helps,
Jon Benedict / Captain KVM
If you’d like to learn more:
Captain KVM (technical blog):