If there is one network utility I wish had been demystified for me as a support engineer, it is the tcpdump tool. I can't count the number of times I ran into a situation where I needed it for troubleshooting but didn't fully understand it or know which options to use. Today, I take a deep dive into the tcpdump tool: what it is used for and what you need to know. I also walk you through a mock-up of a situation I found myself in previously. Let's jump into it.
What is tcpdump?
The tcpdump tool was developed in the late 1980s and has been a network troubleshooting staple ever since. It is distributed under a BSD license and is free to download and use. It works on most *nix operating systems and has a ported version for Windows. At the most basic level, tcpdump is a packet capture tool used to troubleshoot network connectivity issues. It is probably most closely compared to Wireshark; however, it is far lighter and is command-line only (no GUI available to my knowledge).
Installation
Before we begin poking around with the command, let's take a look at installing it. It ships with most modern Linux distributions, so you probably already have it. You can verify this by running which tcpdump. If it isn't installed, don't worry; installation is simple. On Red Hat Enterprise Linux and similar distributions, run the following command:
$ sudo yum install -y tcpdump
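If you're on a Debian-based distribution such as Ubuntu instead, the package is typically available through apt under the same name:
$ sudo apt install -y tcpdump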
Basic usage
Now that we have the tool ready to use, let's look at its most basic functions. Before we start capturing packets, we need to see which network interfaces are available for capture. To do this, we use:
$ sudo tcpdump -D
Here is a sample from my Red Hat Enterprise Linux machine:
[tcarrigan@server ~]$ sudo tcpdump -D
[sudo] password for tcarrigan:
1.enp0s3 [Up, Running]
2.enp0s8 [Up, Running]
3.lo [Up, Running, Loopback]
4.any (Pseudo-device that captures on all interfaces) [Up, Running]
5.virbr0 [Up]
6.bluetooth-monitor (Bluetooth Linux Monitor) [none]
7.nflog (Linux netfilter log (NFLOG) interface) [none]
8.nfqueue (Linux netfilter queue (NFQUEUE) interface) [none]
9.usbmon0 (All USB buses) [none]
10.usbmon1 (USB bus number 1)
11.virbr0-nic [none]
This command is extremely useful in enterprise environments where specific interfaces are used to move particular types of data. We look at this situation more closely later in this article. Now, let's capture some packets to see what the output looks like and what information we are gathering.
For a basic capture, use the following:
[root@server ~]# tcpdump -i any
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
18:42:10.914742 IP server.example.com.55018 > 216.126.233.109.ntp: NTPv4, Client, length 48
18:42:10.915759 IP server.example.com.59656 > router.charter.net.domain: 1974+ PTR? 109.233.126.216.in-addr.arpa. (46)
18:42:10.959920 IP router.charter.net.domain > server.example.com.59656: 1974 ServFail 0/0/0 (46)
18:42:10.960089 IP server.example.com.42825 > router.charter.net.domain: 1974+ PTR? 109.233.126.216.in-addr.arpa. (46)
*** Shortened output ***
^C
17 packets captured
18 packets received by filter
1 packet dropped by kernel
Here, we use the -i flag to indicate the interface we want to listen on, in this case any. Note that tcpdump continues to capture packets until it receives an interrupt signal via Ctrl+C. Another option is the -c flag, which limits the number of packets captured. In my opinion, this limit is one of the best ways to use the command, because much of the time you are simply trying to confirm connectivity, which can be diagnosed rather quickly.
[root@server ~]# tcpdump -i any -c 3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
18:51:54.509439 IP server.example.com.58249 > 216.126.233.109.ntp: NTPv4, Client, length 48
18:51:54.510413 IP server.example.com.46277 > router.charter.net.domain: 9710+ PTR? 109.233.126.216.in-addr.arpa. (46)
18:51:54.570112 IP 216.126.233.109.ntp > server.example.com.58249: NTPv4, Server, length 48
3 packets captured
10 packets received by filter
1 packet dropped by kernel
I have another quick tip for troubleshooting with tcpdump. By default, it resolves IP addresses and port numbers into names (see above). In large environments where naming schemes are a bit tricky, you can disable this resolution and see the raw IP addresses and port numbers instead. From a troubleshooting perspective, I find this far less confusing, and it also makes searching through your capture output a bit easier. We use the -nn flag to disable both name and port resolution:
[root@server ~]# tcpdump -i any -c3 -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
19:56:12.804327 IP 10.0.3.15.41153 > 64.79.100.196.123: NTPv4, Client, length 48
19:56:12.867789 IP 64.79.100.196.123 > 10.0.3.15.41153: NTPv4, Server, length 48
19:56:13.739885 IP 10.0.3.15.50968 > 216.126.233.109.123: NTPv4, Client, length 48
3 packets captured
3 packets received by filter
0 packets dropped by kernel
Other useful filters
To filter by IP address:
$ sudo tcpdump host x.x.x.x
To capture on a specific interface:
$ sudo tcpdump -i eth0
To filter by source:
$ sudo tcpdump src x.x.x.x
To filter by destination:
$ sudo tcpdump dst x.x.x.x
To filter by protocol:
$ sudo tcpdump icmp
There are a vast number of options and filters to really hone your captures down to only the most useful traffic. If you need more info, check out the man page or other online sources.
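These filters can also be combined with the logical operators and, or, and not. As a rough sketch (using the same placeholder interface and address style as the list above), capturing only ICMP traffic to or from a single host on a specific interface might look like:
$ sudo tcpdump -i eth0 -nn 'host x.x.x.x and icmp'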
Practical application
As I stated earlier, during my time as a support engineer, I spent a considerable amount of time troubleshooting data replication from production to disaster recovery environments. A customer would often have a designated replication interface set up to send traffic from their production server over to a replication target server. Let's walk through what that looks like at a basic level and use tcpdump to verify traffic from our source interface to the destination.
Prerequisite conditions
- Source server - 172.25.1.5
- Destination server - 172.25.1.4
- Replication interface - enp0s8
In theory, when we start a data replication job, we should see traffic flow from 172.25.1.5 to 172.25.1.4.
I started a quick "replication" (ping) job in the background on the source server; a minimal sketch of what that might look like is shown below. Next, we run tcpdump on the source and destination servers to see if we are receiving the traffic.
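The background "replication" job was just a ping standing in for real replication traffic; assuming the destination address listed above, it might look something like:
$ ping 172.25.1.4 > /dev/null 2>&1 &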
From the source:
[root@server ~]# tcpdump -i enp0s8 -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s8, link-type EN10MB (Ethernet), capture size 262144 bytes
23:17:55.347648 ARP, Request who-has 172.25.1.4 tell 172.25.1.5, length 28
23:17:56.378194 ARP, Request who-has 172.25.1.4 tell 172.25.1.5, length 28
23:17:57.398294 ARP, Request who-has 172.25.1.4 tell 172.25.1.5, length 28
23:17:58.422946 ARP, Request who-has 172.25.1.4 tell 172.25.1.5, length 28
23:17:59.448412 ARP, Request who-has 172.25.1.4 tell 172.25.1.5, length 28
^C
5 packets captured
5 packets received by filter
0 packets dropped by kernel
You can see that the above traffic consists only of ARP requests, with no reply from the target. In a real scenario, this would indicate an issue on the destination, as we can clearly see the traffic being sent over the source interface.
After I turned the destination interface back on...
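How you bring the interface back up depends on how it went down; on a setup like this, it might be something like one of the following (assuming NetworkManager manages a connection named enp0s8):
$ sudo ip link set enp0s8 up
$ sudo nmcli connection up enp0s8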
Here are the traffic captures from the source and the destination after the issue was identified and resolved.
Source:
[root@server ~]# tcpdump -i enp0s8 -c3 -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s8, link-type EN10MB (Ethernet), capture size 262144 bytes
23:22:04.694919 IP 172.25.1.5 > 172.25.1.4: ICMP echo request, id 7168, seq 911, length 64
23:22:04.695346 IP 172.25.1.4 > 172.25.1.5: ICMP echo reply, id 7168, seq 911, length 64
23:22:05.724968 IP 172.25.1.5 > 172.25.1.4: ICMP echo request, id 7168, seq 912, length 64
3 packets captured
3 packets received by filter
0 packets dropped by kernel
Destination:
[root@client ~]# tcpdump -i enp0s8 -c3 -nn
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s8, link-type EN10MB (Ethernet), capture size 262144 bytes
23:22:13.916519 IP 172.25.1.5 > 172.25.1.4: ICMP echo request, id 7168, seq 920, length 64
23:22:13.916569 IP 172.25.1.4 > 172.25.1.5: ICMP echo reply, id 7168, seq 920, length 64
23:22:14.935720 IP 172.25.1.5 > 172.25.1.4: ICMP echo request, id 7168, seq 921, length 64
3 packets captured
4 packets received by filter
0 packets dropped by kernel
A closer look at the output shows that traffic is now being sent successfully from the source server to the target server, and the target is replying.
Summary
We learned the what and why of tcpdump today, as well as the options you need to know, and we even took a look at a real-world use case. Obviously, there are other considerations in a live environment; everything from interfaces being down (as in this example) to bad passwords over the wire can cause failures. Only experience teaches you these lessons, but at least now you know how to get started identifying an issue. My next article explores filter options a bit further, shows how to output your captures to a file, and uses grep to find the needle in your haystack. Be sure to keep an eye out for it.
For more detailed information about using tcpdump, check out this introduction to using tcpdump at the Linux command line on Opensource.com, and see the official documentation in the Red Hat Customer Portal for a better understanding of tcpdump inside a Red Hat Enterprise Linux environment.
About the author
Tyler is the Sr. Community Manager at Enable Sysadmin, a submarine veteran, and an all-round tech enthusiast! He was first introduced to Red Hat in 2012 by way of a Red Hat Enterprise Linux-based combat system inside the USS Georgia Missile Control Center. Now that he has surfaced, he lives with his wife and son near Raleigh, where he worked as a data storage engineer before finding his way to the Red Hat team. He has written numerous technical documents, from military procedures to knowledgebase articles and even some training curricula. In his free time, he blends a passion for hiking, climbing, and bushcraft with video games and computer building. He loves to read and enjoy a scotch or bourbon. Find him on Twitter or on LinkedIn.