As you know from my previous two articles, "Linux troubleshooting: Setting up a TCP listener with ncat" and "The ncat command is a problematic security tool for Linux sysadmins," netcat
is a command that is both your best friend and your worst enemy. This article reinforces that point with a look at how ncat
provides a useful, but potentially dangerous, option for creating a port redirection link. I show you how to set up a port or site forwarding link so that you can perform maintenance on a site while still serving customers.
The scenario
You need to perform maintenance on an Apache installation on server1, but you don't want the service to appear offline for your customers, which in this scenario are internal corporate users of the labor portal that records hours worked for your remote users. Rather than notifying them that the portal will be offline for six to eight hours, you've decided to create a forwarding service to another system, server2, while you take care of server1's needs.
This method is an easy way of keeping a specific service alive without tinkering with DNS or corporate firewall NAT settings.
- Server1: Port 8088
- Server2: Port 80
The steps
To set up this site/service forward, you need to satisfy the following prerequisites:
- ncat (nmap package, should be installed by default)
- A functional duplicate of the server1 portal on server2
- Root or sudo access to servers 1 and 2 for firewall changes
If you've cleared these hurdles, it's time to make this change happen.
The implementation
Configuring ncat
in this way makes use of a named pipe, which is an efficient way to create this two-way communication link: both ncat processes write to and read from a file in your home directory. There are multiple ways to do this, but I'm going to use the one that works best for this type of port forwarding.
Create the named pipe
Creating the named pipe is easy using the mkfifo
command.
$ mkfifo svr1_to_svr2
$ file svr1_to_svr2
svr1_to_svr2: fifo (named pipe)
I used the file
command to demonstrate that the file is there and it is a named pipe. This command is not required for the service to work. I named the file svr1_to_svr2
, but you can use any name you want. I chose this name because I'm forwarding from server1 to server2.
Create the forward service
Formally, this is called setting up a Listener-to-Client relay, but it makes a little more sense if you think of it in firewall terms, hence my "forward" name and description.
$ ncat -k -l 8088 < svr1_to_svr2 | ncat 192.168.1.60 80 > svr1_to_svr2 &
Issuing this command drops you back to your prompt because you put the service into the background with the &
. As you can see, the named pipe and the service are both created as a standard user. I discussed the reasons for this restriction in my previous article, The ncat command is a problematic security tool for Linux sysadmins.
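If you want to confirm that the relay is actually running, the shell's jobs builtin (from the same session) or a quick ps check shows the two ncat processes, for example:
$ jobs -l
$ ps -ef | grep [n]cat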
Command breakdown
The first part of the command, ncat -k -l 8088
, sets up the listener for connections that ordinarily would be answered by the Apache service on server1. That service is offline, so you create a listener to answer those requests. The -k
(keep-open) option keeps the listener running after each connection closes, so it can serve multiple requests. The -l
option puts ncat into listen mode. Port 8088
is the port you want to mimic, which is that of the customer portal.
The second part, to the right of the pipe operator (|
), connects to 192.168.1.60 on port 80 and relays the requests there. The shell pipe carries request data from the listener to the second ncat, and the named pipe (svr1_to_svr2
) carries the responses back to the listener.
The usage
Now that you have your relay set up, it's easy to use. Point your browser to the original host and customer portal, which is http://server1:8088
. The relay transparently forwards your requests to server2 on port 80, and your browser still displays the original URL and port.
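If you'd rather verify the relay from the command line before pointing users at it, a quick header check with curl (assuming server1 resolves as in the scenario above) should return the portal's response from server2:
$ curl -I http://server1:8088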
I have found that too many repetitive requests can cause this service to fail with a broken pipe message on server1. This doesn't always kill the service, but it can. My suggestion is to set up a script that checks whether the forward command is still running and restarts it if it isn't. You can't check for the existence of the svr1_to_svr2
file because it always exists. Remember, you created it with the mkfifo
command.
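Here's a minimal sketch of such a watchdog, assuming the named pipe sits in your home directory and the relay uses the exact command line shown above (the script name, pipe path, and schedule are examples, not requirements). You could run it from the standard user's cron every few minutes:
#!/bin/bash
# forward_check.sh: restart the ncat relay if it has died (example sketch)
PIPE="$HOME/svr1_to_svr2"

# pgrep -f matches the full command line of the listener half of the relay
if ! pgrep -f "ncat -k -l 8088" > /dev/null; then
    # Relaunch the relay in the background, exactly as it was started by hand
    ncat -k -l 8088 < "$PIPE" | ncat 192.168.1.60 80 > "$PIPE" &
fi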
The caveat
The downside of this ncat
capability is that a user could forward traffic to their own duplicate site and gather usernames and passwords. The malicious actor would have to kill the current port listener/web service to make this work, but it's possible to do this even without root access. Sysadmins have to maintain vigilance through monitoring and alerting to avoid this type of security loophole.
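As a starting point for that monitoring, you could periodically list listening TCP sockets along with their owning processes and alert on any ncat listener you didn't start yourself (a basic illustration; your monitoring stack may do this differently):
$ sudo ss -tlnp | grep ncat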
The wrap up
The ncat
command has so many uses that each feature deserves an article of its own. This article introduced you to the concept of a Listener-to-Client relay, or service forwarding, as I call it. It's useful for short maintenance periods but should not be used for permanent redirects. For those, edit DNS and corporate firewall NAT rules to send requests to their new destinations. Remember to turn off any ncat
listeners when you're finished with them, as they do leave a system open to compromise. Never create these services with the root user account.