As a system administrator, shells are a part of daily operations. Shells often provide more options and flexibility than a graphical user interface (GUI). Daily repetitive tasks can easily be automated by scripts, or tasks can be scheduled to run at certain times during the day. A shell provides a convenient way to interact with the system and enables you to do more in less time. There are many different shells, including Bash, zsh, tcsh, and PowerShell.
In this two-part blog post, I share some of the Bash one-liners I use to speed up my work and leave more time to drink coffee. In this initial post, I'll cover history, last arguments, working with files and directories, reading file contents, and Bash functions. In part two, I'll examine shell variables, the find command, file descriptors, and executing operations remotely.
Use the history command
The history command is a handy one. History allows me to see what commands I ran on a particular system and what arguments were passed to them. I use history to re-run commands without having to retype anything.
The record of recent commands is stored by default in ~/.bash_history. This location can be changed by modifying the HISTFILE shell variable. There are other variables, such as HISTSIZE (the number of commands kept in memory for the current session) and HISTFILESIZE (the number of lines kept in the history file). If you want to know more about history, see man bash.
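These variables can be made persistent in ~/.bashrc. The values and extra settings below are just personal preferences, not defaults; HISTCONTROL and HISTTIMEFORMAT are standard Bash variables:

```shell
# Example history settings for ~/.bashrc (values are personal preference)
export HISTSIZE=5000            # commands kept in memory for the session
export HISTFILESIZE=10000       # lines kept in the history file
export HISTCONTROL=ignoredups   # skip consecutive duplicate entries
export HISTTIMEFORMAT='%F %T '  # show a timestamp with each history entry
```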
Let's say I run the following command:
$> sudo systemctl status sshd
Bash tells me the sshd service is not running, so the next thing I want to do is start the service. I had checked its status with my previous command, and that command was saved in history, so I can reference it. I simply run:
$> !!:s/status/start/
sudo systemctl start sshd
The above expression has the following content:
- !! - repeat the last command from history
- :s/status/start/ - substitute status with start
The result is that the sshd service is started.
Next, I increase the default HISTSIZE value from 500 to 5000 by using the following command:
$> echo "HISTSIZE=5000" >> ~/.bashrc && source ~/.bashrc
What if I want to display the last three commands in my history? I enter:
$> history 3
1002 ls
1003 tail audit.log
1004 history 3
I run tail on audit.log by referring to the history line number. In this case, I use line 1003:
$> !1003
tail audit.log
..
..
Imagine you've copied something from another terminal or your browser and accidentally paste it (from the copy buffer) into the terminal. Those lines get stored in the history, which in this case is something you don't want. That's where unset HISTFILE && exit comes in handy:
$> unset HISTFILE && exit
or
$> kill -9 $$
Reference the last argument of the previous command
When I want to list directory contents for different directories, I may change between directories quite often. There is a nice trick you can use to refer to the last argument of the previous command. For example:
$> pwd
/home/username/
$> ls some/very/long/path/to/some/directory
foo-file bar-file baz-file
In the above example, some/very/long/path/to/some/directory is the last argument of the previous command.
If I want to cd (change directory) to that location, I enter:
$> cd $_
$> pwd
/home/username/some/very/long/path/to/some/directory
Now I simply use a dash character to go back to where I was:
$> cd -
$> pwd
/home/username/
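A common pattern that combines these tricks is creating a directory and changing into it right away; $_ picks up the argument I just gave to mkdir (the path below is made up):

```shell
mkdir -p /tmp/projects/new_app   # hypothetical path
cd "$_"                          # $_ expands to the last argument of mkdir
pwd                              # /tmp/projects/new_app
```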
Work on files and directories
Imagine that I want to create a directory structure and move a bunch of files having different extensions to these directories.
First, I create the directories in one go:
$> mkdir -v dir_{rpm,txt,zip,pdf}
mkdir: created directory 'dir_rpm'
mkdir: created directory 'dir_txt'
mkdir: created directory 'dir_zip'
mkdir: created directory 'dir_pdf'
Next, I move the files based on the file extension to each directory:
$> mv -- *.rpm dir_rpm/
$> mv -- *.pdf dir_pdf/
$> mv -- *.txt dir_txt/
$> mv -- *.zip dir_zip/
The double dash (--) means end of options. This flag prevents file names that begin with a dash from being treated as options.
Next, I want to rename all *.txt files to *.log files, so I enter:
$> for f in ./*.txt; do mv -v "$f" "${f%.*}.log"; done
renamed './file10.txt' -> './file10.log'
renamed './file1.txt' -> './file1.log'
renamed './file2.txt' -> './file2.log'
renamed './file3.txt' -> './file3.log'
renamed './file4.txt' -> './file4.log'
Instead of using the for loop above, I can install the prename command and accomplish the same goal like this:
$> prename -v 's/\.txt$/.log/' *.txt
file10.txt -> file10.log
file1.txt -> file1.log
file2.txt -> file2.log
file3.txt -> file3.log
file4.txt -> file4.log
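The ${f%.*} expansion in the rename loop is one of several Bash suffix/prefix operators; here is a quick sketch (the filename is made up):

```shell
f=./notes.v2.txt
echo "${f%.*}"      # shortest suffix match of '.*' removed: ./notes.v2
echo "${f##*/}"     # longest prefix match of '*/' removed: notes.v2.txt
echo "${f%.*}.log"  # the substitution used in the rename loop: ./notes.v2.log
```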
Often, when modifying a configuration file, I make a backup copy of the original one by using a basic copy command. For example:
$> cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0.back
As you can see, repeating the whole path and appending .back to the file isn't efficient and is error-prone. There is a shorter, neater way to do it:
$> cp /etc/sysconfig/network-scripts/ifcfg-eth0{,.back}
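The empty first element in the braces makes the expansion reproduce the path once unchanged; the same trick gives datestamped backups. A sketch with a throwaway file:

```shell
touch /tmp/app.conf                  # throwaway example file
cp -v /tmp/app.conf{,.back}          # expands to: cp -v /tmp/app.conf /tmp/app.conf.back
cp -v /tmp/app.conf{,.$(date +%F)}   # datestamped copy, e.g. /tmp/app.conf.2024-05-01
```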
You can perform different checks on files or variables. Run help test for more information.
Use the following command to discover if a file is a symbolic link:
$> [[ -L /path/to/file ]] && echo "File is a symlink"
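A few more of the test operators I use often, run here against a temporary file so the sketch is self-contained:

```shell
tmpfile=$(mktemp)
echo data > "$tmpfile"
[[ -f $tmpfile ]] && echo "regular file"
[[ -s $tmpfile ]] && echo "non-empty"
[[ -w $tmpfile ]] && echo "writable"
[[ -L $tmpfile ]] || echo "not a symlink"
rm -f "$tmpfile"
```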
Here is an issue I ran across recently. I wanted to gunzip/untar a bunch of files in one go. Without thinking, I typed:
$> tar zxvf *.gz
The result was:
tar: openvpn.tar.gz: Not found in archive
tar: Exiting with failure status due to previous errors
The tar files were:
iptables.tar.gz
openvpn.tar.gz
…..
Why didn't it work, and why would ls -l *.gz work instead? Under the hood, it looks like this:
$> tar zxvf *.gz
Is transformed as follows:
$> tar zxvf iptables.tar.gz openvpn.tar.gz
tar: openvpn.tar.gz: Not found in archive
tar: Exiting with failure status due to previous errors
The tar command expected to find openvpn.tar.gz within iptables.tar.gz. I solved this with a simple for loop:
$> for f in ./*.gz; do tar zxvf "$f"; done
iptables.log
openvpn.log
I can even generate random passwords by using Bash! Here's an example that builds a 16-character password from letters and digits:
$> alphanum=( {a..z} {A..Z} {0..9} ); for ((i=0; i<16; i++)); do printf '%s' "${alphanum[RANDOM % ${#alphanum[@]}]}"; done; echo
Here is an example that uses OpenSSL:
$> openssl rand -base64 12
JdDcLJEAkbcZfDYQ
Read a file line by line
Assume I have a file with a lot of IP addresses and want to operate on those IP addresses. For example, I want to run dig to retrieve reverse-DNS information for the IP addresses listed in the file. I also want to skip lines that start with a comment character (#).
I'll use fileA as an example. Its contents are:
10.10.12.13 some ip in dc1
10.10.12.14 another ip in dc2
#10.10.12.15 not used IP
10.10.12.16 another IP
I could copy and paste each IP address and then run dig manually:
$> dig +short -x 10.10.12.13
Or I could do this:
$> while read -r ip _; do [[ $ip == \#* ]] && continue; dig +short -x "$ip"; done < fileA
What if I want to swap the columns in fileA? For example, I want to put IP addresses in the right-most column so that fileA looks like this:
some ip in dc1 10.10.12.13
another ip in dc2 10.10.12.14
not used IP #10.10.12.15
another IP 10.10.12.16
I run:
$> while read -r ip rest; do printf '%s %s\n' "$rest" "$ip"; done < fileA
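The same swap can also be done with a short awk program instead of the read loop; shown here against a generated copy of fileA:

```shell
# Recreate the sample data so the sketch is self-contained
printf '10.10.12.13 some ip in dc1\n10.10.12.14 another ip in dc2\n' > /tmp/fileA
# Save the first field, strip it from the line, then print the rest plus the IP
awk '{ ip = $1; sub(/^[^ ]+ /, ""); print $0, ip }' /tmp/fileA
# some ip in dc1 10.10.12.13
# another ip in dc2 10.10.12.14
```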
Use Bash functions
Functions in Bash are different from those written in Python, C, awk, or other languages. In Bash, a simple function that accepts one argument and prints it would look like this:
func() { local arg="$1"; echo "$arg" ; }
I can call the function like this:
$> func foo
Sometimes a function invokes itself recursively to perform a certain task. For example:
f() { local arg="$*"; echo "$arg"; f "$arg"; }; f foo bar
This recursion will run forever and consume a lot of resources. In Bash, you can use the FUNCNEST variable to limit recursion. In the following example, I set FUNCNEST=5 to limit the recursion depth to five:
f() { local arg="$*"; echo "$arg"; FUNCNEST=5; f "$arg"; }; f foo bar
foo bar
foo bar
foo bar
foo bar
foo bar
bash: f: maximum function nesting level exceeded (5)
Use a function to retrieve the most recent or oldest file
Here is a sample function to display the most recent file in a certain directory:
latest_file()
{
    local f latest
    for f in "${1:-.}"/*
    do
        [[ $f -nt $latest ]] && latest="$f"
    done
    printf '%s\n' "$latest"
}
This function displays the oldest file in a certain directory:
oldest_file()
{
    local f oldest
    for f in "${1:-.}"/*
    do
        [[ -z $oldest || $f -ot $oldest ]] && oldest="$f"
    done
    printf '%s\n' "$oldest"
}
These are just a few examples of how to use functions in Bash without invoking other external commands.
I sometimes find myself typing a command over and over with a lot of parameters. One command I often use is kubectl (the Kubernetes CLI). I am tired of running this long command! Here's the original command:
$> kubectl -n my_namespace get pods
or
$> kubectl -n my_namespace get rc,services
This syntax requires me to manually include -n my_namespace each time I run the command. There is an easier way to do this using a function:
$> kubectl () { command kubectl -n my_namespace "$@" ; }
Now I can run kubectl without having to type -n my_namespace each time:
$> kubectl get pods
I can apply the same technique to other commands.
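When I need the plain binary again, the wrapper is easy to inspect and drop (my_namespace is the placeholder from the example above):

```shell
kubectl() { command kubectl -n my_namespace "$@" ; }  # the wrapper from above
type -t kubectl    # prints "function" while the wrapper is active
unset -f kubectl   # remove the function so the real kubectl binary is used again
```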
Wrap up
These are just a few excellent tricks that exist for Bash. In part two, I will show some more examples, including the use of find and remote execution. I encourage you to practice these tricks to make your command-line administration tasks easier and more accurate.
[ Free online course: Red Hat Enterprise Linux technical overview. ]
About the author
Valentin is a system engineer with more than six years of experience in networking, storage, high-performing clusters, and automation.
He is involved in different open source projects like bash, Fedora, Ceph, FreeBSD and is a member of Red Hat Accelerators.