While containers are the prevalent way to deploy applications in the modern enterprise, the virtual machines (VMs) they run on are still at the foundation of most computing environments. The ability to automatically create VMs on demand and then destroy them when no longer needed provides cost benefits and operational efficiency that you can't ignore.
But for architects working at enterprise scale, automating VM creation and provisioning can't be done in an improvisational manner. Designing a reliable creation-and-provisioning automation process for VMs requires experimentation: few architects get it right the first time, and it takes trial and error to arrive at a production-ready process.
[ You might also be interested in reading Containers vs. virtual machines: Why you don't always have to choose. ]
But there's a rub.
Production-grade servers for virtualization cost money—a lot of money. According to Paul Teich, director of product management at Equinix Metal:
"If you want really big hyper-converged infrastructure to host a lot of rich VMs containing lots of cores, lots of memory, and gobs of local storage in a 2U form factor, it's not hard to spend $60,000-$80,000 per server."
Obviously, this is not the type of hardware anyone has hanging around for experimentation work, nor should it be. But experimentation is necessary when designing automation processes for VM creation and provisioning, so where do you do it?
One way is to use the virtualization services offered by the popular cloud providers. They all have APIs that you can use to spin up and provision VMs on demand. This approach is viable, but it can get complicated, especially when working within the specific constraints of each cloud provider. Another way is to use a desktop computer running a desktop hypervisor such as VirtualBox or Kernel-based Virtual Machine (KVM).
The case for a desktop virtualization lab
I'm a fan of using a desktop virtualization lab when I'm in the experimentation phase of creating my VM provisioning process, particularly when writing Ansible playbooks to install artifacts and applications on a VM once it's created. I like the privacy, and I also like the ease of use.
My practice is not to use my main workstation computer. There are just too many things that can go haywire to put my main machine at risk. Instead, I use a relatively inexpensive computer as my virtualization host and do my VM creation and provisioning against that machine. If anything goes wrong that affects this host computer, I simply do a complete reinstall of the host operating system and then reinstall the hypervisor and the automation engine.
Creating a multiphase design approach
Once I've created an automation process that works locally, I'll escalate the process onto a cloud provider. I consider executing my automation process against the cloud provider to be the staging phase before deploying my process into a production environment.
I tend to spend most of my design effort using the desktop hypervisor. That's where all the trial-and-error work takes place. The time I spend in staging is minimal. I might have to write a particular set of automation instructions for the staging environment to create the VM and add it to the network. Still, the provisioning scripts that install the artifacts and applications on the VM tend to be highly portable.
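To illustrate why those provisioning scripts travel well, here is a minimal Ansible playbook sketch. The inventory group name (`lab_vms`) and the choice of NGINX as the installed artifact are hypothetical, but a playbook shaped like this can target a local VirtualBox VM or a cloud instance without changes:

```yaml
---
# Hypothetical playbook: provisions a web server on any VM reachable
# over SSH, whether local (VirtualBox/Vagrant) or cloud-hosted.
- name: Provision web application artifacts
  hosts: lab_vms            # assumed inventory group; swap in your own
  become: true
  tasks:
    - name: Install NGINX
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure NGINX is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Because the playbook uses the generic `package` and `service` modules rather than anything hypervisor- or cloud-specific, only the inventory changes between the desktop lab and staging.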
[ A free guide from Red Hat: The automation architect's handbook. ]
The critical thing to understand about designing a VM creation-and-provisioning process is that I do the detailed design work locally. Once that's complete, I escalate to a production-like staging area to ensure the process can scale appropriately. This staging area can be either an enterprise-level private environment or a cloud provider. Then I work with site-reliability personnel and production-level sysadmins to get the process into the real world.
A case in point
My technique for designing and implementing a process for creating and provisioning VMs at scale is not the only one you can use, but it's one that's worked well for me over the years. In the spirit of sharing, I've created an automation process you can use to see if the desktop aspect of my technique works for you. You can find the code for the process in my GitHub repository. The figure below illustrates how the automation process works for creating and provisioning the VMs.
My technique for using a local desktop virtualization lab requires that you have VirtualBox, Vagrant, and Ansible installed on the local computer you intend to use. As you can see in the illustration above, I use Vagrant to create the VMs that I plan to provision. Then, I use Ansible to provision applications and artifacts on each of the VMs. The README file in my GitHub project describes the details for getting it all up and running.
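As a concrete sketch of the Vagrant-plus-Ansible pattern described above (not the exact code from my repository), a Vagrantfile along these lines creates the VMs under VirtualBox and hands each one off to an Ansible playbook for provisioning. The base box, VM count, IP addresses, and playbook path are all illustrative assumptions:

```ruby
# Hypothetical Vagrantfile sketch: creates two VirtualBox VMs and
# provisions each with an Ansible playbook. Box name, VM count, IPs,
# and playbook path are illustrative, not taken from the repository.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu2204"   # assumed base box

  (1..2).each do |i|
    config.vm.define "vm-#{i}" do |node|
      node.vm.hostname = "vm-#{i}"
      node.vm.network "private_network", ip: "192.168.56.1#{i}"

      node.vm.provider "virtualbox" do |vb|
        vb.memory = 1024
        vb.cpus = 1
      end

      # Run the Ansible provisioner against the VM after it boots
      node.vm.provision "ansible" do |ansible|
        ansible.playbook = "playbook.yml"  # assumed path
      end
    end
  end
end
```

With a file like this in place, `vagrant up` creates and provisions every VM, and `vagrant destroy -f` tears the lab down when the experiment is over.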
Putting it all together
While using containers as the primary way to segment and deploy applications is a popular approach these days, virtualization automation is still a necessary part of modern enterprise systems. In most cases, to run containers, you still need VMs. This means enterprise architects should make automated VM creation and provisioning part of the overall architectural design process.
Virtualization automation design work cannot be improvisational. There are too many moving parts in play. Things can and will go wrong unless the approach to the automation design process includes a trial-and-error experimentation phase. An important component of this phase is having a desktop virtualization lab on hand for conducting those experiments. Using the techniques in this article and the automation scripts in the accompanying GitHub project will help get your desktop virtualization lab up and running in no time.
[ For more cloud computing resources, see An architect's guide to multicloud infrastructure. ]