In the hallowed halls of open source software development, a few pieces of software stand out above all others as unwavering standards of quality, collaboration, and security. The Linux kernel qualifies for this category, but the real winner of the title would probably have to be the Apache Web Server. The ubiquity and power of this httpd server enabled the growth of RESTful traffic: when SOAP was trying to gain a footing, everyone simply stopped inventing new gateways and threw all the traffic at Apache.

But it is the very success of the Apache httpd server that enabled NGINX to rise as another contender for the title of "Most Useful and Widely Used Open Source Project." Because RESTful traffic is different in nature from simple Web traffic, NGINX has grown to address the next-layer problems facing developers and network administrators who now need to read this REST traffic and schedule it for delivery to an ever-shifting host of containers. And as it has evolved, it has also expanded to support enormous scale for caching, balancing, and proxying.

Ten years ago, just about every virtualized Web stack had Apache in it. Today, just about every containerized open hybrid cloud stack has NGINX in it. Whether it's load balancing, Web serving, caching, or reverse proxying, NGINX is like the Swiss Army knife of handling RESTful traffic.
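To make those roles concrete, here is a minimal, hypothetical `nginx.conf` sketch showing reverse proxying, load balancing, and caching together. The upstream hostnames, ports, and cache path are placeholders for illustration, not from any real deployment:

```nginx
http {
    # Cache zone for proxied responses (path and sizes are assumptions)
    proxy_cache_path /var/cache/nginx keys_zone=api_cache:10m max_size=100m;

    # Round-robin load balancing across two hypothetical app containers
    upstream app_backend {
        server app1.internal:8080;
        server app2.internal:8080;
    }

    server {
        listen 80;

        location /api/ {
            # Reverse proxy REST traffic to the backend pool
            proxy_pass http://app_backend;

            # Cache successful responses briefly to absorb bursts
            proxy_cache api_cache;
            proxy_cache_valid 200 1m;
        }
    }
}
```

One process, three jobs: the same server block terminates client connections, spreads requests across a shifting set of backends, and shields them with a cache.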

Therefore, it's important to have NGINX close at hand whenever you're working as an architect in the open hybrid cloud. We'd like to give a big thank you to NGINX for this excellent tutorial explaining how to get started with the NGINX Ingress Operator. Using NGINX to manage Kubernetes ingress is like building a highway off-ramp with a stop light right next to your brand new factory. Unleash the hordes while remaining in control. Go take a look!
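For a sense of what managing Kubernetes ingress with NGINX looks like in practice, here is a minimal, hypothetical `Ingress` resource routed through an NGINX ingress controller. The host, service name, and port are placeholders, not taken from the linked tutorial:

```yaml
# Routes external HTTP traffic for app.example.com (placeholder host)
# to a backend Service, handled by an NGINX-based ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: factory-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: factory-service
                port:
                  number: 80
```

This is the "off-ramp with a stop light": the Ingress resource declares where traffic may enter, and the NGINX controller enforces it at the edge of the cluster.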

About the author

Red Hatter since 2018, tech historian, founder of, serial non-profiteer.
