
There are plenty of overloaded terms in the tech industry, and proxy is one of them. When most people think of a proxy, they imagine a webpage that serves as a gateway to an intranet, or a suspicious-looking webpage designed to unblock social media sites on a school or work network. In the second kind of proxy, an individual user navigates to a page, provides a token (a user name, a password, or a URL they want to visit), and is then forwarded on to some wider space (a network, intranet, or the internet). The inverse of that kind of proxy is the reverse proxy, which accepts all traffic and forwards it to a specific resource, such as a server or container. NGINX is one of the most popular open source web servers, and it is also a reverse proxy.


Traffic flowing through a proxy and a reverse proxy.

This article focuses on reverse proxies.

Networking with reverse proxies

Reverse proxies are a powerful way to redirect traffic to specific parts of your infrastructure. They can be used purely as a convenience, for added security, or for load balancing. Let’s take a look at some possibilities in more detail.

Load balancing

A reverse proxy is useful for network load balancing. If you maintain several physical servers capable of answering requests for a service, you can list them together as an upstream group. NGINX then distributes incoming requests across the group (round-robin by default), so no single server becomes overtaxed.
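As a rough sketch, an upstream group is declared inside the http block of nginx.conf and referenced by name in proxy_pass; the group name and server addresses below are placeholders, not real infrastructure:

```nginx
# Hypothetical upstream group; replace the addresses with your own servers.
upstream backend {
    server 192.0.2.10;
    server 192.0.2.11;
    server 192.0.2.12;
}

server {
    listen 80;

    location / {
        # Requests are distributed across the group, round-robin by default.
        proxy_pass http://backend;
    }
}
```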

Container routing

If you are running services in containers and intend for them all to be available within a single domain, you can use a reverse proxy to seamlessly direct incoming requests to the appropriate container.
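One hedged sketch of this setup is a single server block that maps URL paths to container ports published on the host; the domain, paths, and ports below are hypothetical:

```nginx
server {
    listen       80;
    server_name  example.com;

    # Each location forwards to a container's published port on the host.
    location /blog/ {
        proxy_pass http://127.0.0.1:8081/;
    }

    location /shop/ {
        proxy_pass http://127.0.0.1:8082/;
    }
}
```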

Bot mitigation

Bots range from innocent and respectful web indexers to attack drones from infected computers, and a reverse proxy can help both detect and block the ones you don’t want on your server. With NGINX as your reverse proxy server, you can:

  • Restrict access to locations that may be obvious targets for brute-force attacks, reducing the effectiveness of DDOS attacks by limiting the number of connections and the download rate per IP address.
  • Cache pre-rendered versions of popular pages to speed up page load times.
  • Interfere with other unwanted traffic when needed.
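As a sketch of the first point, NGINX’s limit_req, limit_conn, and limit_rate directives can cap the per-IP request rate, connection count, and download speed. The zone declarations go inside the http block; the zone names, sizes, and limits below are illustrative only, not recommendations:

```nginx
# Illustrative limits only; tune the zones, rates, and caps for your site.
limit_req_zone  $binary_remote_addr zone=perip:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=peraddr:10m;

server {
    listen 80;

    # A login page is an obvious brute-force target.
    location /login/ {
        limit_req   zone=perip burst=20;  # cap request rate per IP
        limit_conn  peraddr 10;           # cap simultaneous connections per IP
        limit_rate  100k;                 # cap download rate per connection
    }
}
```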

Installing NGINX

You can install NGINX from your Linux distribution’s software repository or BSD ports tree. For example, on CentOS, Fedora, or Red Hat Enterprise Linux:

$ sudo dnf install nginx

On Debian or Ubuntu use the following:

$ sudo apt install nginx

Tip: If you choose to install from source code or to create your own package, beware that NGINX has both an open source and a non-open version, so be sure to use the .org site, and not the .com site.

After installing, start the NGINX service and enable it to launch at boot time:

$ sudo systemctl enable --now nginx

Navigate to http://localhost in your browser to verify that the web server is running as expected:

The NGINX welcome page.

Setting up your lab

For the purpose of this article, assume that the default NGINX test page is the target for incoming traffic. You want to set up a reverse proxy that redirects traffic from that default location to something else, whether it’s a separate physical server, a dedicated virtual machine, or a container. For this article, the built-in Python HTTP server stands in for the imaginary server or container to which you want to redirect traffic.

First, create a simple HTML page in a dedicated directory:


$ mkdir fakeserver
$ cd fakeserver
$ echo "<html><head><title>Python3 http.server</title></head> \
<body><p>You have been drawn here by a mysterious force.</p> \
</body></html>" > index.html

Still within the fakeserver directory, start a Python 3 HTTP server in a separate terminal window:

$ python3 -m http.server 8888

Open a web browser and navigate to localhost:8888 to test your fake server:

The Python 3 HTTP server’s test page.
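Incidentally, the python3 -m http.server one-liner is just the standard library’s http.server module. A minimal programmatic sketch of the same idea follows; it binds port 0 (any free port the OS picks) rather than 8888, so it won’t collide with a server you already have running:

```python
# A minimal programmatic equivalent of `python3 -m http.server 8888`,
# except that binding port 0 lets the OS pick a free port.
import threading
import urllib.request
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve files from the current directory.
handler = partial(SimpleHTTPRequestHandler, directory=".")
server = HTTPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]

# Serve requests in a background thread.
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the directory listing to confirm the server answers.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as response:
    print(response.status)  # 200

server.shutdown()
```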

On systems with SELinux, this exercise violates SELinux permissions. Specifically, while you (the user) are allowed to access port 8888 through a web browser, NGINX is not. This is a sane and secure default since websites generally run on either port 80 (HTTP) or 443 (HTTPS). You usually want SELinux to prevent NGINX from accessing port 8888 or any other non-standard port since, by default, it should never attempt to.

However, for this article it’s helpful to use a non-standard port to demonstrate NGINX’s capabilities and flexibility, so you must allow NGINX to access whatever port it wants to access:

$ sudo setsebool -P httpd_can_network_connect 1

If you’re not yet familiar with SELinux, you can find out more in the excellent article Your visual how-to guide for SELinux policy enforcement.

Configuring your reverse proxy

The proxy module’s proxy_pass function provides NGINX with a reverse proxy. To use proxy_pass, you must first know where you want to direct traffic. In real life, this answer varies depending upon your infrastructure, but for the purpose of this article your destination is your Python 3 fake server (located at port 8888), and not the NGINX test page.

Each web server is defined in a server block within /etc/nginx/nginx.conf. In the server, you define a location to set a specific URI. In this case, set the server’s root directory, and use the proxy_pass function to make the root of your web server a proxy to your temporary Python web server.

The default NGINX configuration file, depending on your system, may interfere with this test; so before continuing, move it to a safe place:

$ sudo mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf-distro

Create a new /etc/nginx/nginx.conf file (indentation doesn’t matter, but semi-colons and brackets do):

worker_processes  1;

events {
    worker_connections  1024; 
}
    
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            proxy_pass  http://localhost:8888/;
            index       index.html index.htm;
        } # end location
    } # end server
} # end http

In real life, the value for server_name would be your fully-qualified domain name (FQDN)—such as example.com—and the value for proxy_pass would be the location you want your redirected traffic to end up. A proxy location can be referred to by its IP address or its FQDN.
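A production server block might look like the following sketch; example.com and the backend address 192.0.2.20 are placeholders for your own FQDN and destination:

```nginx
server {
    listen       80;
    server_name  example.com;

    location / {
        # Forward all traffic for example.com to the backend host.
        proxy_pass http://192.0.2.20:8080/;
    }
}
```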

Testing your reverse proxy

Your reverse proxy has been created, so it’s time to test. Before restarting NGINX, test your configuration file:

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If NGINX returned errors, open the configuration file and fix your syntax. When your NGINX tests are successful, restart it with systemctl:

$ sudo systemctl restart nginx

Launch a web browser and navigate to localhost. You are now proxied to your Python server on port 8888.

You can try further NGINX options and modules at your leisure, but when you’re finished experimenting, be sure to set your SELinux HTTP server Boolean back to 0:

$ sudo setsebool -P httpd_can_network_connect 0

Proxying for success

There are many more proxy options available within NGINX. For example, in real life you should adjust the amount of caching your proxy performs: less for active and dynamic pages that change often and more for pages that change infrequently and get heavy traffic. So, install NGINX and experiment. Get familiar with its configuration options, and you’ll be able to shape how the world accesses the data you are serving. For more details about NGINX and its many configuration options, read nginx.org/en/docs.
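For instance, response caching can be sketched in nginx.conf as follows; the cache path, zone name, and validity time below are illustrative, not recommendations:

```nginx
http {
    # On-disk cache store with a 10 MB in-memory key zone.
    proxy_cache_path /var/cache/nginx keys_zone=mycache:10m;

    server {
        listen 80;

        location / {
            proxy_cache        mycache;
            # Keep successful responses for 10 minutes.
            proxy_cache_valid  200 10m;
            proxy_pass         http://localhost:8888/;
        }
    }
}
```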


About the author

Seth Kenlon is a Linux geek, open source enthusiast, free culture advocate, and tabletop gamer. Between gigs in the film industry and the tech industry (not necessarily exclusive of one another), he likes to design games and hack on code (also not necessarily exclusive of one another).
