
Automate network testing with this open source Linux tool

Use iperf3 to troubleshoot bandwidth, timing, protocol, and other problems on your TCP/IP network.

TCP/IP networking is a complex topic, and it gets really tricky when you are trying to nail down performance issues or troubleshoot a problem. It helps to have tools that can probe your systems and either confirm your suspicions or, better yet, show that there are no problems at all.

One of these tools is the open source iperf3. Here's its description from GitHub:

iperf is a tool for active measurements of the maximum achievable bandwidth on IP networks. It supports tuning various parameters related to timing, protocols, and buffers. Each test reports the measured throughput/bitrate, loss, and other parameters.

This article shows you how to:

  • Inspect bandwidth issues between two endpoints with iperf3
  • Test User Datagram Protocol (UDP) multicast connectivity (which Precision Time Protocol and other protocols use for time synchronization)
  • Uncover cyclic redundancy check (CRC) errors on a network interface
  • Use ethtool and tcpdump to confirm that a bad network interface or cable drops traffic
  • Write more complex scripts using Python 3

I will also briefly explain CPU affinity and why it might matter to iperf3.

Get started with iperf3

To follow along with this tutorial, you will need:

  • A Linux distribution (I executed my examples on a Fedora server)
  • The ability to run commands as root (using sudo, for example)
  • A basic understanding of networking principles

Run the command to install iperf3. On Fedora:

$ sudo dnf install -y iperf3

[ Get a cheat sheet of Linux utilities and commands for managing servers and networks. ]

Iperf3 works by running a client and server that talk to each other. Here are some terms to know before you begin using it:

  • Throughput measures how many packets arrive at the destination successfully.
  • Network bandwidth is a network's maximum transfer throughput capacity.
  • Jitter is the variation in delay between packets as they travel across the network, not the delay itself. Good connections have a consistent delivery time (see the small sketch after this list).
  • TCP stands for Transmission Control Protocol. It is a reliable, connection-oriented protocol: it sets up a connection with a handshake and uses acknowledgments and retransmissions to guarantee that packets arrive in the order they were sent.
  • UDP doesn't have a handshake protocol like TCP. It is faster than TCP, but if a packet is lost, it won't be resent, and there is no guarantee that the packets will arrive in the order sent.
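To make the jitter definition concrete, here is a small illustrative sketch in Python. It is not how iperf3 computes its jitter value (iperf3 uses a smoothed estimate), and the delays are made-up numbers; the point is only that jitter measures variation in delay, not the delay itself:

#!/usr/bin/env python3
"""Illustration only: jitter as the average variation between consecutive packet delays."""

# Hypothetical one-way delays for five packets, in milliseconds
delays_ms = [10.1, 10.3, 9.9, 10.2, 10.0]

# Compare each delay with the previous one and average the differences
variations = [abs(later - earlier) for earlier, later in zip(delays_ms, delays_ms[1:])]
jitter_ms = sum(variations) / len(variations)

print(f"Average delay:  {sum(delays_ms) / len(delays_ms):.2f} ms")
print(f"Average jitter: {jitter_ms:.2f} ms")  # A steady link keeps this value small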

In the demonstration in this article:

  • The client and server bind to the wired Ethernet interface. (I will not use the wireless interfaces as they are more prone to jitter due to external noise.)
  • My test uses the defaults: port 5201 and a TCP connection, unless overridden with the --udp flag on the client.

The demonstration confirms whether:

  • The switch between the two machines supports 1,000Mbits/sec connections and you configured the interfaces at that capacity.
  • Full-duplex mode is enabled to send and receive data on the card simultaneously. You will confirm this later in the article with another tool called ethtool.

Without further delay, I'll get started.

Measure bandwidth and jitter

Here are the initial commands on the server:

[server ~]$ sudo ethtool eth0|rg -e 'Speed|Duplex'
	Speed: 1000Mb/s
	Duplex: Full

[server ~]$ ip --oneline address|rg 192
2: eth0    inet 192.168.1.11/24 brd 192.168.1.255 scope global dynamic eth0\       valid_lft 2090sec preferred_lft 2090sec

[server ~]$ iperf3 --server --bind 192.168.1.11 --affinity 1
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

And now the client:

[client ~]$ sudo ethtool eno1|rg -e 'Speed|Duplex'
	Speed: 1000Mb/s
	Duplex: Full

[client ~]$ iperf3 --client raspberrypi --bind 192.168.1.28 --affinity 1
Connecting to host raspberrypi, port 5201
[  5] local 192.168.1.28 port 47609 connected to 192.168.1.11 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   111 MBytes   932 Mbits/sec    0   2.79 MBytes       
[  5]   1.00-2.00   sec   110 MBytes   923 Mbits/sec    0   2.98 MBytes       
...     
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1021 MBytes   857 Mbits/sec    0             sender
[  5]   0.00-9.95   sec  1020 MBytes   860 Mbits/sec                  receiver

iperf Done.

I'll break down the results:

  • Zero retries (Retr column). That is good and expected.
  • The bitrate is around 860 Mbits/sec, close to the link's 1,000 Mbits/sec theoretical maximum. Some overhead is expected, and switches also limit how much traffic their backplane can handle.
  • TCP guarantees delivery (lost packets are retransmitted), so jitter is not reported here.

If you reverse the test (the client is now the server), you should see similar results.
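If you would rather collect these numbers from a script than read them off the terminal, iperf3 can print its report as JSON with the --json flag. Here is a minimal sketch; the key names under "end" (sum_sent, sum_received) match recent iperf3 releases, but treat them as assumptions and check them against your version's output:

#!/usr/bin/env python3
"""Minimal sketch: run an iperf3 TCP test and pull the summary numbers from its JSON report."""
import json
import subprocess

SERVER = "raspberrypi"  # Replace with your iperf3 server

# --json makes iperf3 print a single JSON document instead of the usual table
completed = subprocess.run(
    ["iperf3", "--client", SERVER, "--json"],
    capture_output=True, text=True, check=True,
)
report = json.loads(completed.stdout)

sent = report["end"]["sum_sent"]          # Key names assumed from recent iperf3 releases
received = report["end"]["sum_received"]
print(f"Sender bitrate:   {sent['bits_per_second'] / 1e6:.0f} Mbits/sec")
print(f"Receiver bitrate: {received['bits_per_second'] / 1e6:.0f} Mbits/sec")
print(f"Retransmits:      {sent.get('retransmits', 0)}")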

[ Network getting out of control? Check out Network automation for everyone, a complimentary book from Red Hat. ]

Test UDP bandwidth

To test UDP, do the following on the client only:

[client ~]$ iperf3 --client raspberrypi --bind 192.168.1.28 --udp --affinity 1
Connecting to host raspberrypi, port 5201
[  5] local 192.168.1.28 port 47985 connected to 192.168.1.11 port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec   129 KBytes  1.05 Mbits/sec  91  
[  5]   1.00-2.00   sec   127 KBytes  1.04 Mbits/sec  90  
[  5]   2.00-3.00   sec   129 KBytes  1.05 Mbits/sec  91  
...
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  1.25 MBytes  1.05 Mbits/sec  0.000 ms  0/906 (0%)  sender
[  5]   0.00-9.99   sec  1.25 MBytes  1.05 Mbits/sec  0.028 ms  0/906 (0%)  receiver

Here are the results:

  • The bitrate sits around 1.05 Mbits/sec, far below the link capacity, because iperf3's UDP test defaults to a 1 Mbit/sec target bitrate. Raise it with the -b flag (--bitrate on recent versions, --bandwidth on older ones) if you want to stress the link; see the sketch after this list.
  • UDP doesn't guarantee delivery, so lost datagrams and jitter are reported, and both look good here: no packet loss and very low jitter.
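Here is a minimal sketch of a scripted UDP run with a higher target bitrate. The 100M value is only an example, and the JSON key names under "end" -> "sum" (jitter_ms, lost_packets, packets) are assumptions based on recent iperf3 releases, so verify them against your version:

#!/usr/bin/env python3
"""Minimal sketch: a UDP iperf3 run with a higher target bitrate, reporting jitter and loss."""
import json
import subprocess

SERVER = "raspberrypi"  # Replace with your iperf3 server

# --udp selects UDP; -b 100M raises the target bitrate well above the 1 Mbit/sec default
completed = subprocess.run(
    ["iperf3", "--client", SERVER, "--udp", "-b", "100M", "--json"],
    capture_output=True, text=True, check=True,
)
summary = json.loads(completed.stdout)["end"]["sum"]  # Key names assumed; check your version

print(f"Bitrate: {summary['bits_per_second'] / 1e6:.1f} Mbits/sec")
print(f"Jitter:  {summary['jitter_ms']:.3f} ms")
print(f"Lost:    {summary['lost_packets']}/{summary['packets']} datagrams")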

You may wonder what that --affinity flag is. It isn't really needed for a simple bandwidth test like this one, but it gives me an excuse to talk about CPU affinity.

Quick detour: CPU affinity, NUMA, isolcpus

If you were curious and checked the iperf documentation and examples, you probably saw references to CPU or processor affinity.

So what is it? From Wikipedia:

Processor affinity, or CPU pinning or "cache affinity", enables the binding and unbinding of a process or a thread to a central processing unit (CPU) or a range of CPUs, so that the process or thread will execute only on the designated CPU or CPUs rather than any CPU.

Why would you want to pin a process to a specific group of CPUs?

No instance with pinned CPUs can use the CPUs of another pinned instance, which prevents resource contention between instances. On a non-uniform memory access (NUMA) system, CPUs and memory are grouped into nodes, and a CPU reaches the memory in its own node faster than memory attached to another node.

By pinning a process to CPUs in a single NUMA node, you ensure it always uses the memory closest to those CPUs.

What does a server with several NUMA nodes look like? You can find out with lscpu|rg NUMA:

[client ~]$ lscpu|rg NUMA
NUMA node(s):                    2
NUMA node0 CPU(s):               0-7
NUMA node1 CPU(s):               8-15

This is a 16-CPU server with two NUMA nodes. (This is a simplified example; a machine with Hyper-Threading enabled looks different, and depending on the application, you may decide to disable that feature.)
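If you want that topology from a script instead of from lscpu, sysfs exposes it. Here is a minimal sketch; the /sys/devices/system/node layout is standard on Linux, although a single-node machine will only show node0:

#!/usr/bin/env python3
"""Minimal sketch: list NUMA nodes and their CPUs straight from sysfs."""
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpulist = (node / "cpulist").read_text().strip()  # For example, "0-7"
    print(f"{node.name}: CPUs {cpulist}")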

Remember that you can use CPU affinity to improve not just network performance but also disk performance.

[ Learn how to manage your Linux environment for success. ]

Coming back to iperf3, you can pin it to a specific CPU using -A or --affinity. For example, pinning it to the third CPU, which is CPU 2 because CPUs are numbered from 0 to n-1, looks like this:

# Equivalent of running iperf3 with numactl: /bin/numactl --physcpubind=2 iperf3 -c remotehost
iperf3 --affinity 2 --client remotehost
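If you drive your tests from Python instead, you can pin the script itself; child processes inherit the affinity, so an iperf3 launched afterward stays on the same CPU. Here is a minimal, Linux-only sketch (CPU 2 and remotehost are just examples):

#!/usr/bin/env python3
"""Minimal sketch: pin this process, and the iperf3 it spawns, to CPU 2."""
import os
import subprocess

os.sched_setaffinity(0, {2})                       # 0 means "the calling process"
print(f"Running on CPUs: {os.sched_getaffinity(0)}")

# The child inherits the affinity, so this is roughly equivalent to iperf3 --affinity 2
subprocess.run(["iperf3", "--client", "remotehost"], check=True)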

Remember that you may also need to tell the operating system to avoid scheduling other processes on those CPUs. If your system boots with GRUB, you can do that by adding the isolcpus kernel parameter with grubby:

# Find the default kernel
$ sudo grubby --default-kernel

# Use that information and add isolcpus parameter, then reboot
$ sudo grubby --update-kernel=/boot/vmlinuz-5.14.18-100.fc33.x86_64 --args="isolcpus=2"
$ sudo shutdown -r now 'Updated kernel isolcpus, need to reboot'
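After the reboot, it is worth confirming that the isolation took effect before you trust your measurements. Here is a minimal sketch; /sys/devices/system/cpu/isolated is available on recent kernels, so treat its presence as an assumption on older ones:

#!/usr/bin/env python3
"""Minimal sketch: confirm the isolcpus kernel parameter is active after the reboot."""
from pathlib import Path

cmdline = Path("/proc/cmdline").read_text().strip()
isolated = Path("/sys/devices/system/cpu/isolated").read_text().strip()  # Empty if nothing is isolated

print(f"Kernel command line: {cmdline}")
print(f"Isolated CPUs:       {isolated or 'none'}")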

Again, this is not needed to troubleshoot a networking issue, but it may come in handy if you want to make iperf3 behave like one of your fine-tuned applications.

[ Learn practical steps and best practices for enterprise network automation implementation: Download the Enterprise automation in a DevOps world checklist. ]

Optimization is a complex topic, so get a cup of coffee (or two) and get ready to start reading.

Use iperf3 to detect dropped packets and CRC errors

A CRC error is caused by a faulty physical device (network card, switch port, or cable) or by a mismatch between full- and half-duplex configurations on two devices. CRC errors are sometimes difficult to track down on switches operating in cut-through mode, because the switch starts forwarding a frame before it has validated the checksum, so corrupted frames propagate to other ports.

This is a simplified scenario to ensure a new network card connection works without CRC or received/transmitted (Rx/Tx) errors (meaning the card, cable, and switch port are OK).

With that in mind, you could do a simple test to ensure the link health is good:

  • Capture the status of the CRC and dropped packet errors on the network card being tested.
  • Run iperf3 in TCP mode for a longer time than usual.
  • Recapture the network card CRC statistics.

If the difference is greater than zero, then:

  1. Check the full-duplex mode on both the card and switch port (ethtool).
  2. Replace the cable.
  3. Reseat or replace the network card.
  4. Change the port on the switch.

You get the picture: iperf3 helps you "burn in" the link and expose any unwanted behavior before you put the interface into production.

Here is the process in action. Say you take the first snapshot on the iperf3 server:

[server ~]$ sudo ethtool --statistics  eth0| rg -i -e 'dropped|error'
     rx_errors: 0
     tx_errors: 0
     rx_dropped: 0
     tx_dropped: 0
     rxq0_errors: 0
     rxq0_dropped: 0
     rxq1_errors: 0
     rxq1_dropped: 0
     rxq2_errors: 0
     rxq2_dropped: 0
     rxq3_errors: 0
     rxq3_dropped: 0
     rxq16_errors: 0
     rxq16_dropped: 0

Then the client:

[client ~]$ sudo ethtool --statistics  eno1| rg -i -e 'dropped|errors'
     tx_errors: 0
     rx_errors: 0
     align_errors: 0

Run the iperf3 server:

[server ~]$ iperf3 --server --bind 192.168.1.11
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Run iperf3 on the client for 120 seconds:

[client ~]$ iperf3 --client raspberrypi --bind 192.168.1.28 --time 120 
Connecting to host raspberrypi, port 5201
[  5] local 192.168.1.28 port 41337 connected to 192.168.1.11 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   111 MBytes   934 Mbits/sec    0   2.94 MBytes       
[  5]   1.00-2.00   sec   111 MBytes   933 Mbits/sec    0   2.95 MBytes       
[  5]   2.00-3.00   sec   111 MBytes   933 Mbits/sec    0   2.95 MBytes       
...
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-120.00 sec  11.0 GBytes   787 Mbits/sec    0             sender
[  5]   0.00-119.70 sec  11.0 GBytes   789 Mbits/sec                  receiver

# Measure again ...
[client ~]$ sudo ethtool --statistics  eno1| rg -i -e 'dropped|errors'
     tx_errors: 0
     rx_errors: 0
     align_errors: 0
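If you plan to run this burn-in check regularly, the snapshot-and-compare routine is easy to script. Here is a minimal sketch; it assumes the "name: value" layout shown above (counter names vary by driver) and that it runs with root privileges, just like the manual commands:

#!/usr/bin/env python3
"""Minimal sketch: snapshot ethtool error/drop counters before and after a test and report changes."""
import re
import subprocess

INTERFACE = "eth0"  # Replace with the interface under test; run as root, as in the manual steps

def error_counters(interface: str) -> dict:
    """Return the ethtool statistics whose names mention errors or drops."""
    output = subprocess.run(
        ["ethtool", "--statistics", interface],
        capture_output=True, text=True, check=True,
    ).stdout
    counters = {}
    for line in output.splitlines():
        match = re.match(r"\s*(\S+):\s+(\d+)$", line)
        if match and re.search(r"error|dropped", match.group(1), re.IGNORECASE):
            counters[match.group(1)] = int(match.group(2))
    return counters

before = error_counters(INTERFACE)
input("Run the long iperf3 test now, then press Enter to compare... ")
after = error_counters(INTERFACE)

changes = {name: after[name] - before.get(name, 0) for name in after}
for name, delta in changes.items():
    if delta:
        print(f"{name} increased by {delta}")
if not any(changes.values()):
    print("No error or drop counters increased")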

Now I'll say a bit more about the tool I used to collect those network interface statistics: ethtool.

What is ethtool?

As Wikipedia explains:

ethtool is the primary means in Linux kernel-based operating systems (primarily Linux and Android) for displaying and modifying the parameters of network interface controllers (NICs) and their associated device driver software from application programs running in userspace.

Here are a couple of trivia questions for you after you're done checking the ethtool man page:

  1. What does the sudo ethtool -g eno1 command do?
  2. And this one? sudo ethtool -s eno1 speed 1000 duplex full autoneg on

The ethtool utility is another tool that you should have in your toolset.

Automate iperf3 with Python 3

You might notice that iperf3 has a library that allows you to integrate the tool with other languages, including Python:

[client ~]$ rpm -qil iperf3|rg libiperf
/usr/lib64/libiperf.so.0
/usr/lib64/libiperf.so.0.0.0
/usr/share/man/man3/libiperf.3.gz

There are several bindings available for Python:

  • iperf3-python provides an API that integrates iperf3 with Python through those bindings (see the sketch after this list).
  • The Python ethtool module exists but is marked as deprecated; it still covers what this demonstration needs.
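To give you a taste of the bindings (this is not the script from the repository), here is a minimal sketch using iperf3-python. The attribute names follow that project's documentation as I recall it, so treat them as assumptions and verify them against the version you install:

#!/usr/bin/env python3
"""Minimal sketch: a TCP test through the iperf3-python bindings (attribute names assumed)."""
import iperf3

client = iperf3.Client()
client.server_hostname = "raspberrypi"  # Same server as in the manual tests
client.port = 5201
client.duration = 10

result = client.run()
if result.error:
    print(f"Test failed: {result.error}")
else:
    print(f"Sent:     {result.sent_Mbps:.0f} Mbits/sec")
    print(f"Received: {result.received_Mbps:.0f} Mbits/sec")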

I will not cover the full API here; instead, I'll point you to the source code of a Python script that uses iperf3 and ethtool to detect network errors, as I did manually above. Check out the repository and run the script, and you will be amazed at how easy it is to automate a few tasks with Python.


What can you do next?

Learning never stops, so here are a few pointers and observations to keep you going:

  • Fasterdata has more examples of using iperf with different parameters.
  • Still curious about how to use CPU affinity? Check Red Hat's systemd documentation. It also has handy examples.
  • Be aware that isolcpus is considered deprecated, and cpuset is recommended. Refer to this Stack Overflow discussion to see how to play with cpuset.
  • You now know how to write your own troubleshooting scripts with the iperf3 Python API. You should probably write an iperf3 server that can show the results using a web browser (maybe combine it with FastAPI?).