You've mastered a Linux tool, but that hard-earned knowledge came at the cost of frequent use, reading the manual pages, and sifting through search results to avoid the bad examples out there.

So what incentive do you have to learn new utilities and replace your existing tools? Here are a few reasons:

  1. You want to be more productive and do more in less time, and a different tool can provide that.
  2. A different tool might better match the way you work. It is nice to use a tool that behaves just the way you expect.
  3. A new tool challenges how you do things. This is important because as you improve, so do the tools and technology around you. It is good when a utility forces you to think outside the box.

This article offers a few interesting new tools to consider using. When evaluating a new tool, consider the community around it, whether it's easy to use, and if it has the functionality you need.

[ Boost your command line skills. Download A sysadmin's guide to Bash scripting. ]

One last thing: The topic of "replacement tools" is always controversial, so be open-minded and try them. There is nothing wrong with the original tools mentioned in the article; these are just options that might help you work better.

Also, for obvious reasons, this article doesn't cover every available tool. Consider this list as a starting point.

Before starting

Here are some things to keep in mind as you try out these new tools:

  • You should be familiar with Linux's command-line interface (CLI). If you're not, read this article to get started.
  • Some of these utilities may not be on your system and will require elevated privileges to install with package managers such as DNF or RPM.
  • It might be better to install some tools under your user, rather than system-wide, with installers like pip.

OK, it's time to try some new tools.

htop and glances: Better than top

The top utility is one of the best general-purpose resource monitoring tools on Linux. It has nice features like saving stats into a file and sorting columns by criteria.
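For example, you can capture a single snapshot sorted by memory usage and save it to a file using top's standard batch-mode options (a quick sketch; the sort field follows top's column names):

$ top -b -n 1 -o %MEM > top-snapshot.txt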

[ Learn what the first five lines of Linux's top command tell you. ]

In the same spirit, the htop command displays more information (like how hard each CPU core is working). Below is a sample session showing how to filter, sort, and search processes using htop:

What makes this tool stand apart? The user interface gives you access to powerful operations with ease.
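If you prefer to set things up from the command line rather than interactively, htop also accepts a few startup flags (a sketch; check htop --help for your version):

# Show only your own processes, sorted by memory usage
$ htop -u "$USER" --sort-key=PERCENT_MEM

# Inside htop: F3 searches, F4 filters, F6 changes the sort column, F5 toggles the tree view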

To install htop on RPM-based distributions:

$ sudo dnf install -y htop

Glances is another tool that gives you lots of information about your system, much like htop:

Why is there another tool like htop? Well, glances has several features that make it interesting:

  1. It can run in server mode, allowing you to connect to it using a web browser or with a REST client.
  2. It can export results in several formats, including Prometheus.
  3. You can write plugins to extend it in Python.

To install it, you can use a virtual environment or do a user installation:

$ pip install --user glances
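Once installed, you can try the server and export modes mentioned above (a sketch; the default web port and available export modules depend on your Glances version):

# Plain terminal mode
$ glances

# Web server mode: point a browser or REST client at http://<host>:61208
$ glances -w

# Export metrics to Prometheus
$ glances --export prometheus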

smem: When you're focused on memory

Utilities like top, htop, and glances give you a full array of details about your server, but what if you are concerned only about memory utilization? In that case, smem is a great option:

It is possible to filter by user, show totals, group usage by user, and even create plots with matplotlib.
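For example (a sketch using common smem options; the chart output requires matplotlib):

# Memory totals grouped by user, with human-readable units
$ smem -u -t -k

# Only processes whose command line matches "firefox"
$ smem -k -P firefox

# A pie chart of memory use by process name (needs matplotlib)
$ smem --pie name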

To install smem on Fedora Linux:

$ sudo dnf install -y smem

ripgrep: Faster than grep

The grep utility is probably one of the most well-known filtering tools; if you've ever needed to search files for lines matching a pattern, chances are you used grep.

[ Happy with the usual option? Download the Linux grep command cheat sheet. ]

A nice replacement for grep is ripgrep. It is fast and has modern features that grep doesn't have:

  1. It can export the output to JSON format. This is a great feature for data capture or interaction with other scripts.
  2. It provides automatic recursive directory searches, skipping hidden files and common ignorable backup files.

Start by comparing a regular recursive grep that only looks inside files with the *.ipynb extension, using a case-insensitive search:

$ time grep --dereference-recursive --ignore-case --count --exclude '.ipynb_*' --include '*.ipynb'  death COVIDDATA/
COVIDDATA/.ipynb_checkpoints/Curve-checkpoint.ipynb:0
COVIDDATA/.ipynb_checkpoints/EUCDC-checkpoint.ipynb:37
COVIDDATA/.ipynb_checkpoints/Gammamulti-checkpoint.ipynb:11
COVIDDATA/.ipynb_checkpoints/Gammapivot-checkpoint.ipynb:11
# ... Omitted output
COVIDDATA/tweakers/zzcorwav.ipynb:10

real	0m0.613s
user	0m0.505s
sys	0m0.105s

Note that, even with the --exclude flag, it still shows the Jupyter .ipynb_checkpoints/* checkpoint files. Next, see ripgrep (rg) in action:

$ time rg --ignore-case --count --type 'jupyter' death COVIDDATA/
COVIDDATA/tweakers/zzcorwav.ipynb:10
COVIDDATA/tweakers/zzbenford.ipynb:2
COVIDDATA/tweakers/EUCDC.ipynb:19
COVIDDATA/Modelpivot.ipynb:9
COVIDDATA/experiment/zzbenford.ipynb:2
COVIDDATA/experiment/zzcorwavgd.ipynb:10
# ... Omitted output
COVIDDATA/experiment/zzcasemap.ipynb:13

real	0m0.068s
user	0m0.087s
sys	0m0.071s

The command line is shorter, and rg skips the Jupyter checkpoint files without any extra help. Check below to see rg working with a few flags:
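Here are a few of them (continuing with the same example data; output omitted):

# List only the files that contain a match
$ rg --files-with-matches --type jupyter death COVIDDATA/

# Emit one JSON message per line, handy for feeding other scripts
$ rg --json death COVIDDATA/

# Use a glob instead of a predefined file type
$ rg --ignore-case --count --glob '*.ipynb' death COVIDDATA/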

Install ripgrep on Fedora Linux using DNF:

$ sudo dnf install ripgrep

drill (ldns): More informative than dig or nslookup

If you need to look up the internet protocol (IP) address behind a DNS name, you probably use dig or nslookup. These commands have been around so long that they have gone in and out of deprecation over the years.

A more modern tool that offers the same functionality is drill (from the ldns project). Say you want to see the MX (mail exchanger) records for the nasa.org domain. First, here's the query with dig:

$ dig @8.8.8.8 nasa.org MX +noall +answer +nocmd
nasa.org.		3600	IN	MX	5 mail.h-email.net.

The drill command gives you the same information, plus some more:

$ drill @8.8.8.8 mx nasa.org
;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 50948
;; flags: qr rd ra ; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 
;; QUESTION SECTION:
;; nasa.org.	IN	MX

;; ANSWER SECTION:
nasa.org.	3600	IN	MX	5 mail.h-email.net.

;; AUTHORITY SECTION:

;; ADDITIONAL SECTION:

;; Query time: 126 msec
;; SERVER: 8.8.8.8
;; WHEN: Sun Jul 10 14:31:48 2022
;; MSG SIZE  rcvd: 58

What does this mean to you?

  • drill can be used as a drop-in replacement for dig.
  • It is good to have a separate implementation of DNS tools to troubleshoot and diagnose bugs.

Distribution maintainers and application developers have even more compelling reasons to use ldns: it is a full DNS library, not just a collection of command-line utilities, and it comes with Python bindings.

Here is a small program that can query the MX records for a given list of domains:
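A minimal sketch with the python3-ldns bindings could look like this (the domain list, resolver setup, and error handling are illustrative):

#!/usr/bin/env python3
"""Print the MX records for a list of domains using the ldns Python bindings."""
import sys
import ldns

domains = sys.argv[1:] or ["nasa.org"]

# Build a resolver from the system configuration
resolver = ldns.ldns_resolver.new_frm_file("/etc/resolv.conf")

for domain in domains:
    # Send an MX query with the recursion-desired flag set
    pkt = resolver.query(domain, ldns.LDNS_RR_TYPE_MX, ldns.LDNS_RR_CLASS_IN, ldns.LDNS_RD)
    if pkt and pkt.answer():
        for rr in pkt.answer().rrs():
            print(rr)
    else:
        print(f"{domain}: no MX records found", file=sys.stderr)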

Install ldns on Fedora Linux like this:

$ sudo dnf install -y python3-ldns ldns-utils ldns

Rich-CLI: One CLI to render all formats

Let's face it: It is quite annoying to use different tools to render different data types nicely on the command-line interface (CLI).

For example, here's a JSON file (no special filtering):

$ /bin/jq '.' ./.thunderbird/pximovka.default-default/sessionCheckpoints.json
{
  "profile-after-change": true,
  "final-ui-startup": true,
  "quit-application-granted": true,
  "quit-application": true,
  "profile-change-net-teardown": true,
  "profile-change-teardown": true,
  "profile-before-change": true
}

An XML file:

$ /bin/xmllint ./opencsv-source/checkstyle-suppressions.xml
<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC "-//Puppy Crawl//DTD Suppressions 1.0//EN" "http://www.puppycrawl.com/dtds/suppressions_1_0.dtd">
<suppressions>
    <suppress files="." checks="LineLength"/>
    <suppress files="." checks="whitespace"/>
    <suppress files="." checks="HiddenField"/>
    <suppress files="." checks="FinalParameters"/>
    <suppress files="." checks="DesignForExtension"/>
    <suppress files="." checks="JavadocVariable"/>
    <suppress files="." checks="AvoidInlineConditionals"/>
    <suppress files="." checks="AvoidStarImport"/>
    <suppress files="." checks="NewlineAtEndOfFile"/>
    <suppress files="." checks="RegexpSingleline"/>
    <suppress files="." checks="VisibilityModifierCheck"/>
    <suppress files="." checks="MultipleVariableDeclarations"/>
</suppressions>

A markup file? A CSV file? A Python script? You see where this is going: a different application for each type. Some of them offer syntax colorization, and others do not. If you want pagination, you most likely need to pipe the output to less, but then you can kiss colorization goodbye.

[ Free download: Advanced Linux commands cheat sheet. ] 

Enter Rich-CLI, an application that's part of the Textualize project, to the rescue. Below, I revisit the two files I opened before, this time using rich. First, here is the JSON file:

$ rich ./.thunderbird/pximovka.default-default/sessionCheckpoints.json
{
  "profile-after-change": true,
  "final-ui-startup": true,
  "quit-application-granted": true,
  "quit-application": true,
  "profile-change-net-teardown": true,
  "profile-change-teardown": true,
  "profile-before-change": true
}

Next, here is the XML file I demonstrated earlier:

$ rich ./opencsv-source/checkstyle-suppressions.xml
<?xml version="1.0"?>

<!DOCTYPE suppressions PUBLIC "-//Puppy Crawl//DTD Suppressions 1.0//EN"
        "http://www.puppycrawl.com/dtds/suppressions_1_0.dtd">
<suppressions>
    <suppress files="." checks="LineLength"/>
    <suppress files="." checks="whitespace"/>
    <suppress files="." checks="HiddenField"/>
    <suppress files="." checks="FinalParameters"/>
    <suppress files="." checks="DesignForExtension"/>
    <suppress files="." checks="JavadocVariable"/>
    <suppress files="." checks="AvoidInlineConditionals"/>
    <suppress files="." checks="AvoidStarImport"/>
    <suppress files="." checks="NewlineAtEndOfFile"/>
    <suppress files="." checks="RegexpSingleline"/>
    <suppress files="." checks="VisibilityModifierCheck"/>
    <suppress files="." checks="MultipleVariableDeclarations"/>
</suppressions>

See the demo below for rendering multiple file types with a single command:
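In each case the invocation is just rich followed by the file name (a sketch; file names are placeholders, and option names can vary between rich-cli versions, so check rich --help):

# Markdown rendered with headings, emphasis, and tables
$ rich README.md

# Python source with syntax highlighting
$ rich app.py

# A CSV file rendered as a table
$ rich data.csv

# Page long output without losing color (assuming the --pager option is available)
$ rich big-file.json --pager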

Installation is trivial with pip:

$ pip install --user rich-cli

Wrap up

You don't need to settle for the default tools that come with your Linux distribution. Many newer tools offer functionality that can make you more productive, and if enough people adopt them, they may eventually become the new defaults.

Also, when evaluating any tool, look at its community and how often it is updated for bugs and new features. An active community is as important as the tool itself.


About the author

Proud dad and husband, software developer and sysadmin. Recreational runner and geek.
