Editor's note: This article was written while James Brigman was a member of the Red Hat Accelerator program.
Here at the dawn of the new decade (or one year from now, if you prefer to count from 2021), almost everyone owns and uses a computer, especially if you count smartphones as computers (which they are). System administrators, being employed in the IT industry, typically have at least one personal system for everyday tasks like surfing the web, shopping, and online banking. Many also keep additional systems, virtual or bare metal, on which they practice system administration for themselves in a safe, private environment entirely under their control.
This situation is why many sysadmins have excessive amounts of hardware in their homes. You'll see everything from a simple virtual machine on a laptop all the way up to a half rack or full rack of server-class hardware.
Whether you already have your home lab tricked out to your liking, or you're contemplating building one of your own, we’ll talk about the ins and outs of home system administration labs. It’s my hope that everyone—from beginner to advanced—will find something useful in this multi-part article. We will document actual hardware builds in follow-on postings.
Let’s start with what to consider regarding hardware.
Intel vs. ARM
We live in a lucky, modern time: IT administrators have three CPU families to choose from when building home systems.
The Intel- and AMD-compatible Complex Instruction Set Computer (CISC) processors are always available, and with the widespread use and acceptance of the ARM processor, we now have a low-power, low-cost option for building systems at home or at work as well.
The ARM processor can be found in systems like the Raspberry Pi (RPi) and some Arduino boards. Strictly speaking, the RPi is a single-board computer and the Arduinos are microcontrollers, but these physically small machines are often lumped together as microcontrollers because they are usually deployed to perform single, limited functions, and some present only an Integrated Development Environment (IDE) to the user.
Microcontrollers become essential to system administrators when those systems must be networked to corporate or engineering switches via their onboard copper Ethernet interfaces. At my place of employment, dozens of these small systems perform engineering functions and connect to the network through wired Ethernet ports. My team occasionally gets questions from engineering departments about these tiny machines, and I have a personal interest in them myself, using them to measure the weather or to control devices such as 3D printers and Computer Numerical Control (CNC) machines. My employer uses them extensively for testing products.
How old is too old? (32-bit vs. 64-bit)
The terms 32-bit and 64-bit refer to the width of the CPU's data path, as well as the width of the system bus data path. That width, in bits, is the foundation for describing a system's capability: 32-bit systems are usually limited to 4GB of RAM or less, and older 32-bit systems are typically capped at around 2TB per disk (the 32-bit Logical Block Addressing limit with 512-byte sectors).
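The 4GB ceiling falls directly out of the address width: a 32-bit CPU can form 2^32 distinct byte addresses. A quick shell check confirms the arithmetic:

```shell
# 2^32 addressable bytes is exactly the 4GB (4GiB) RAM ceiling:
echo $(( 2 ** 32 ))              # 4294967296 bytes
echo $(( 2 ** 32 / 1024 ** 3 ))  # 4 (GiB)
```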
In 2020, the laptop and desktop PCs you can easily buy typically feature a 64-bit Intel-compatible CPU. If the system has a genuine Intel processor, it usually lists the class of processor as Core i3, Core i5, Core i7, or Core i9 (in addition to Intel's X-series processors). "Core" isn't an industry term; it's simply how Intel brands and ranks its own products.
One legitimate claim to fame for Linux is that we can put ancient 32-bit hardware back on the job, often for a single purpose or a few small tasks. I have a small, 32-bit set-top computer that I continue to use for a single-purpose application that must run on physical hardware and will never be moved to a virtual machine. As long as an operating system is still available for it, a 32-bit system works fine for single functions, web displays, or low-power, always-on duties.
The problem is that 32-bit Intel/AMD systems are of limited use for reproducing an IT environment at home. While you can run some corporate software on a 32-bit system, you can't host virtual machines under VMware's vSphere ESXi on one. Most rich, modern applications no longer run on 32-bit systems, either, including videoconferencing, current web browsers, online learning platforms, and IT chat tools like Slack.
Of course, I've just now lied to you. The world is chock-full of mobile phones that can run Slack, play videos, host online learning, and videoconference. (I actually prefer my mobile phone for videoconferencing.)
However, most system administrators must be able to use a hypervisor to control and manage one or more virtual machines, and modern hypervisors like vSphere ESXi require 64-bit systems. Therefore, except for microcontrollers, 64-bit hardware is the new normal. This architecture offers RAM capacities of 8GB and up (1TB of RAM isn't unusual in corporate IT environments) and disk capacities of many terabytes. In 2020's big data environments, it's not uncommon to find a petabyte of disk space in a datacenter, an amount of storage once thought impractical.
Most importantly, to run hypervisors like ESXi or KVM, you need an Intel processor with the Intel VT-x and Intel 64 extensions, or an AMD processor with the AMD-V and AMD64 extensions. So, my fundamental recommendation is to build your home system administration platform on a modern 64-bit Intel-compatible CPU with 8GB of RAM or more and at least 1TB of disk space. That's a reasonable system for hosting a hypervisor and running more than one virtual machine.
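On Linux, you can verify these extensions before committing to a hypervisor. A minimal sketch (the flag names are standard Linux `/proc/cpuinfo` conventions; no output from the first command means the feature is missing or disabled in the BIOS/UEFI):

```shell
# "vmx" means Intel VT-x, "svm" means AMD-V; any output indicates
# hardware virtualization support:
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u

# The "lm" (long mode) flag confirms a 64-bit-capable CPU:
grep -qw lm /proc/cpuinfo && echo "64-bit capable"
```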
Chipsets, graphics, and network interfaces
For running hypervisors, you need to worry about hardware compatibility. VMware provides a sophisticated public-facing website for checking your hardware's compatibility. vSphere ESXi has long been regarded as the most restrictive of the hypervisor platforms, while KVM's hard requirements fall mainly on the CPU, and it is less stringent about the chipset, graphics, and network interfaces.
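Before consulting a compatibility list, it helps to inventory what's actually in the box. A minimal sketch using `lspci` (from the pciutils package, assumed installed); the bracketed vendor:device IDs it prints are what hardware compatibility lists typically key on:

```shell
# List the chipset, graphics, and network devices with their PCI
# vendor:device IDs in brackets:
lspci -nn | grep -Ei 'ethernet|network|vga|host bridge'
```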
Things to be aware of
I've often told friends, family, and associates that the home environment has far more stringent requirements for computer hardware than the datacenter. People don't live in datacenters, so datacenters are air-conditioned to suit the hardware. A home, though, is where people live, and people need room temperature, quiet, and (preferably) efficient power usage. So, any system you use in the home will make the humans happier if it is small and quiet, doesn't create much heat, and uses as little electric power as possible.
The internet router companies really have this formula down: most internet routers are fanless, silent, and low-power. The computer builds we discuss in this article aren't monster gaming systems; they perform one or more specific system administration functions, so adhering to these standards is a good idea.
Another massive difference in the home environment is that your systems can (and will) eventually lose power. If you don't have an Uninterruptible Power Supply (UPS) or generator at home, your system has to tolerate power problems, or the abrupt loss of power, without damage. You want your home system to still boot up correctly after a significant power outage.
However, it's not uncommon for spinning disk drives to fail after a power outage. When the platters stop spinning, the drive cools down below its nominal operating temperature, and those two changes can generate enough stiction to prevent the drive from spinning back up again when power returns. (Yes, stiction really is a thing.) The good news is that solid-state drives (SSDs) don't have motors or spinning platters, so for them, stiction is not a thing. SSDs are fantastic devices for a low-power system; you just have to account for their finite write endurance with a little software maintenance.
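On a systemd-based Linux distribution (an assumption; the unit below ships with util-linux on most modern distros), that maintenance can be as simple as enabling periodic TRIM, which tells the SSD which blocks are free so it can manage wear:

```shell
# Enable the weekly TRIM timer shipped with util-linux:
sudo systemctl enable --now fstrim.timer

# Or run a one-off TRIM and see how much space was discarded:
sudo fstrim --verbose /
```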
So, if you can acquire 64-bit systems that run cool (or even fanless), quiet, and small, you will see much more acceptance from your family, and you will find the system more reliable and much easier to share a room with. Here's an example link to the kind of PC that maximizes running cool, quiet, and small. This product isn't a recommendation, merely an example of what is possible.
Big cases vs. small cases
For decades, large deskside computer cases have ruled home labs and even home gaming machines. The ability to house a large form-factor motherboard with huge amounts of RAM, many disk drives, and multiple video cards was desirable, along with being able to cool all of that hardware properly. Now, there are laptops and fanless PCs that can efficiently run many virtual machines. Go for a big case if you have many legacy hard drives you need to use, but if you have a choice of storage, a big case is no longer a requirement for building a home lab.
Monitors, mice, keyboards, and consoles
Linux is welcoming to old monitors, mice, and keyboards. You can count on drivers being available for your old 640x480 monitor, your Dvorak keyboard, or your three-button mouse. But, while those components are cheap, new versions bring with them relevant and useful functionality.
Monitors today have resolutions of 1920x1080 and up. Mice have more buttons and scroll wheel functions. Keyboards are programmable and have useful new function keys. A big new monitor, keyboard, or mouse provides an immediate, visible, or tactile upgrade to a computer. This fact justifies spending a little money on these items and is rarely a bad idea. Plus, they make great low-cost holiday and birthday gifts for system administrators.
However, if you have parts, monitors, and keyboards around the house, or have access to older hardware for free, rock on. You may have to put up with lower performance or capacity, but to get the ball rolling, you can’t beat free.
Now, let’s look at what to consider regarding software.
Bare metal vs. a hypervisor or VM
There are fundamental differences between running software on bare metal and running it in a virtual machine under a hypervisor. Running software on bare metal usually means you have hardware and an operating system only, and you accomplish the system's objective using just those two. If we describe this setup using a layer paradigm, you get this:
- Application Layer
- Operating System Layer
- Hardware Layer
So, a real-world example of this configuration might be:
- Application Layer: Spreadsheet software
- Operating System Layer: Raspbian OS
- Hardware Layer: Raspberry Pi
Running bare metal often means pressing older, lower-performing hardware with less memory and disk storage into service. That setup might work just fine to fill a particular need, especially if high performance isn't necessary.
Virtualization introduces a hypervisor layer and abstracts the physical machine into a virtual machine. Using the same Raspberry Pi example:
- Application Layer: Spreadsheet software
- Operating System Layer: Raspbian OS
- Virtual Layer: Virtual machine
- Virtualization Layer: Hypervisor
- Hardware Layer: Hardware
Because of this additional complexity, running virtual machines demands faster 64-bit hardware with more RAM and faster disks (or SSDs) for storage. Although it would be wonderful to run virtual machines with top performance on 32-bit systems, home users typically jump to higher-spec 64-bit systems with more RAM and more drives.
Docker on Raspberry Pi
Docker and containers are almost the converse of the virtualization model. Using containers, you run an OS on bare metal and then install and manage your application layer within containers.
You can run Docker on a Raspberry Pi, which makes a reasonably good learning platform for Docker and container concepts. We'll demonstrate this capability in a later article. Here's a good reference article for Docker on the RPi.
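As a taste of that, here's a minimal sketch of getting Docker running on Raspberry Pi OS using Docker's documented convenience script; the `pi` username is the distro's historical default and may differ on your system:

```shell
# Install Docker via Docker's convenience script:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Let the default "pi" user run docker without sudo (log out and back in
# afterwards for the group change to take effect):
sudo usermod -aG docker pi

# Sanity check; Docker pulls the matching ARM image automatically on a Pi:
docker run --rm hello-world
```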
Media server, backend, and frontend
Many system administrators build either a media server or a back-end media/streaming server. It's important to note that when you include friends, family, or roommates in this setup, you now have a production server rather than a sysadmin home lab.
Media servers are often just a backend, which requires a frontend that decodes the video or audio for playback. These days, it's not uncommon to find a $35 Raspberry Pi pulling duty as a frontend because it has a high-quality hardware video decoding and display chip onboard. Offloading video tasks to that chip pairs well with the ARM CPU's low clock speed.
JBOD vs. RAID vs. Unraid
RAID gets hashed and re-hashed all of the time, so I won’t go into all of the RAID levels and such here. Instead, I will list the three options in order of increasing cost:
- JBOD: (Lowest cost) This option can consist of just one spindle or one USB flash memory stick.
- Unraid: (Midrange cost) This option is a software layer over the disk hardware that helps manage the disk.
- RAID: (Highest cost) A Redundant Array of Independent Disks (RAID)—you thought you'd get away without seeing the acronym spelled out?—is typically the most expensive option because it requires a controller that can connect to multiple drives. The need for parity ups the cost, too, because that's disk space you can't use for storing your own data; it's the overhead that lets you fail out one drive and keep working.
It's not a bad idea to test a server concept with a single spindle; if you find you need higher performance or reliability, move up to Unraid or RAID.
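If you take the RAID route on commodity hardware, Linux's `mdadm` software RAID is a low-cost middle path that needs no special controller. A minimal RAID 1 sketch, assuming two spare drives that appear as `/dev/sdb` and `/dev/sdc` (placeholder names; this destroys any data on the member drives):

```shell
# Mirror two drives as /dev/md0 (RAID 1): usable space is one drive's
# worth, the other is the redundancy overhead.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0

# Confirm both members show as active and in sync:
sudo mdadm --detail /dev/md0
```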
[Looking for more information on RAID? Check out "RAID for those who avoid it". ]
Upcoming example builds
In later articles, I'll demonstrate builds starting from the lowest possible cost (RPi) up to commercial-quality systems running hardware RAID, and we'll talk about the pros and cons of each build.
Before I close, I want to be clear: low cost has advantages beyond merely saving you money. Low-cost gear is also disposable, reconfigurable, and even "give-away-able." I'll follow the progression from low to high cost and hopefully demonstrate how lower-cost hardware can be more flexible and useful in the home environment than the pricier options.
Here are references to tide you over: