
Automating the tasks we perform is one of the most important parts of our jobs as sysadmins. It's not just about performing the many tasks required to keep the systems we support up and running. It's about making things easier for ourselves and for the sysadmins who stand in for us while we're on vacation or out sick; it's about ensuring that we can do our jobs quickly with a minimum of manual intervention; it's about, dare I say it, being the lazy sysadmin.

I've written extensively about automation in my books and articles, and my mantra is always, "automate everything." But how do you know where to start?

[ You might also like: 8 tips for reliable Linux system automation ]

The pain point

I started down the road to automation by reducing a major pain point for one of the most important tasks that sysadmins perform—backups. I started with a very small network: one computer and an Internet connection. Backups were easy, although the technology was a series of tape drives that eventually failed.

Initially, I typed in a command on Friday evenings to back up all of my important directories, and I sometimes checked to verify that the backups had been created successfully. They usually had, although the tape was often to blame when they hadn't.

As my network grew and I became responsible for networks other than my own, I found that using the command line to make multiple backups became quite tedious. However, technology advanced and I also discovered that external USB hard drives make an excellent backup medium, and a script makes backing up several computers much easier. Using cron jobs or systemd timers also allows me to schedule backups.
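
Scheduling with cron, for instance, takes just a single crontab entry. This sketch is illustrative: the script path and the exact time are hypothetical, not taken from my actual setup.

```shell
# Hypothetical crontab entry (edit with `crontab -e`): run the backup
# script at 23:05 every Friday evening.
# Fields: minute hour day-of-month month day-of-week command
5 23 * * 5 /usr/local/bin/backup.sh
```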

My current backup system uses a Bash script that employs rsync to create backups of up to a dozen computers in my existing home network. The backups are first created on a 4TB internal hard drive and then written to one of a series of external 4TB USB hard drives. I can easily transport the external drives to my safe deposit box for off-site backup. You can read about the details of this backup system in my article, Using rsync to back up your Linux system. The key is to find your most intense pain point and start with that.

My strategy

I really have only one strategy for deciding what to automate first, or next: pick the task that causes me the most pain at the moment. That pain might come from repeatedly typing the same commands, waiting for one step to finish before entering the next, or struggling to remember the proper syntax for commands I use frequently.

You probably already know the source of the most pain in your sysadmin life. That's the first candidate for automation, especially if it's a relatively small task rather than something as large as a complete, advanced backup system. I started with a straightforward backup system that used tar and some fun features of SSH, which I wrote about in Best Couple of 2015: tar and ssh.
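
The trick in that pairing is streaming a tar archive through an SSH pipe. The real invocation would be something like `tar -czf - /home | ssh backup-host "cat > /backups/home.tgz"` (host and paths hypothetical); in this sketch a local pipe stands in for the SSH hop so it runs anywhere.

```shell
#!/usr/bin/env bash
# Sketch of the tar-streaming idea behind a tar-over-ssh backup.
set -euo pipefail
src=$(mktemp -d); dest=$(mktemp -d)
echo "payload" > "$src/file.txt"
# The first tar writes the archive to stdout (-f -); in the real backup,
# ssh would carry that stream to the remote host. The receiving tar
# unpacks from stdin.
tar -C "$src" -czf - . | tar -C "$dest" -xzf -
ls "$dest"
```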

Another pain point for me has been performing Fedora updates, including security and functional fixes as well as feature enhancements. This also includes performing upgrades from one Fedora release to the next, such as from Fedora 32 to Fedora 33.
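
For reference, that release-to-release upgrade follows the standard dnf system-upgrade sequence, shown here for the Fedora 32 to 33 jump. These commands must be run as root on the machine being upgraded; the last one reboots it.

```shell
# Fedora release upgrade via the system-upgrade plugin (run as root).
dnf upgrade --refresh                   # bring the current release fully up to date
dnf install dnf-plugin-system-upgrade   # provides the system-upgrade subcommand
dnf system-upgrade download --releasever=33
dnf system-upgrade reboot               # reboots into the offline upgrade
```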

There are also many options for implementing automation, regardless of the task. Part of my strategy has been to start with scripts so that I fully understand the solution and any problems that might arise. I'll write a script to solve a problem on one host, copy it to all hosts on the network, and then run a command-line Bash program to execute that script on all the hosts. It takes this form:

for host in $(cat ~/list-of-hosts) ; do ssh "$host" "script-name" ; done

But even that becomes a chore and another pain point with enough hosts on enough networks. It can also be problematic when some hosts need to be treated differently from others. I found that more advanced tools such as Ansible can automate tasks over many hosts on a network while treating certain types, such as servers, differently from standard workstations. Ansible doesn't require the distribution of scripts onto each host to perform its work; it doesn't even need to be installed on each host—only on the system used as the "hub."
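
As an illustration of that hub-and-spoke model, a minimal Ansible playbook that applies updates to one group of hosts might look like the sketch below. The group name, the task, and the assumption that the inventory defines separate `servers` and `workstations` groups are all hypothetical, not taken from my actual configuration.

```yaml
# Hypothetical playbook: update every host in the "workstations" inventory
# group. Ansible runs entirely over SSH from the hub; nothing is installed
# on the target hosts.
- name: Update all workstations
  hosts: workstations
  become: true
  tasks:
    - name: Apply all pending package updates
      ansible.builtin.dnf:
        name: "*"
        state: latest
```

A separate play targeting the `servers` group could then apply different tasks, which is exactly the differential treatment the shell loop above can't easily express.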

The PHB pain point

We have all had Pointy-Haired Bosses (PHBs), and sometimes they are the pain point. Suppose some PHB asks for a list of all RPMs on a particular Linux computer and a short description of each. This happened to me while I worked at the State of North Carolina. Open source was not "approved" for use by state agencies at that time, and I only used Linux on my desktop computer. The PHBs needed a list of each piece of software installed on my system so that they could "approve" an exception.

It took me about five minutes to write a quick script that could be run as many times in the future as they asked me this same question. It listed the RPM packages installed on my host and extracted the description from each package. This script produced a list of over 1,900 packages with a short description of each. I sent this list to the PHB who had requested it and never heard back about it again—ever.
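
The essence of that script is a single rpm query. This sketch is my reconstruction, not the original script: the output path is illustrative, and it's guarded so it only queries the package database on RPM-based systems.

```shell
#!/usr/bin/env bash
# Sketch: one line per installed package, with its one-line summary.
set -euo pipefail
if command -v rpm >/dev/null; then
    # --queryformat pulls the named fields from the RPM database
    # for every installed package.
    rpm -qa --queryformat '%{NAME}: %{SUMMARY}\n' | sort > /tmp/rpm-list.txt
    wc -l < /tmp/rpm-list.txt
else
    echo "rpm not found; this sketch targets RPM-based distributions." >&2
fi
```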

Sometimes the pain point is easily—and quickly—resolved. But the PHBs usually demand immediate attention.

[ Looking for more on system automation? Get started with The Automated Enterprise, a free book from Red Hat. ] 

Final thoughts

I started by creating a simple automation script to address the task that caused me the most pain. I then moved on to the next pain point, and so on. Eventually, those original pain points come back around and need to be revisited with more advanced tools such as Ansible. This is an iterative process that will never end.


About the author

David Both is an open source software and GNU/Linux advocate, trainer,
writer, and speaker who lives in Raleigh, NC. He is a strong
proponent of and evangelist for the "Linux Philosophy."

David has been in the IT industry for over 50 years. He has taught RHCE
classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State
of North Carolina. He has been working with Linux and open source
software for over 20 years.

David likes to purchase the components and build his own computers from
scratch to ensure that each new computer meets his exacting
specifications. His primary workstation is an ASUS TUF X299 motherboard
and an Intel i9 CPU with 16 cores (32 CPUs) and 64GB of RAM in a
CoolerMaster MasterFrame 700.

David has written articles for magazines including Linux Magazine and
Linux Journal. His article "Complete Kickstart," co-authored with a
colleague at Cisco, was ranked 9th in the Linux Magazine Top Ten Best
System Administration Articles list for 2008. David currently writes
prolifically for OpenSource.com and Enable Sysadmin.

David currently has five books published with Apress: "The Linux
Philosophy for SysAdmins"; "Using and Administering Linux: Zero to
SysAdmin," a self-study training course in three volumes released in
late 2019; and "Linux for Small Business Owners," co-authored with
Cyndi Bulka.

David can be reached at LinuxGeek46@both.org or on Twitter @LinuxGeek46.
