What is a load balancer? A load balancer is an efficient way to distribute network traffic across a group of backend servers, often called a server farm or server pool. It routes client requests or network load to the target web servers, and by using algorithms such as round robin, it helps ensure high reliability and availability.

Load balancer in a network between clients and backend services

One scenario

You have a web server that can manage 100 clients at a time. Suddenly, the requests to that server increase by 100 percent. The website is likely to crash or become unresponsive. To avoid this situation, set up one or more target web servers behind a main (proxy) server. In this scenario, the client never connects to a target web server directly. Instead, the request goes to the main server, which forwards it to a target web server. The target web server sends its reply back through the main server, which returns it to the client. This arrangement is known as a reverse proxy.

[ You might also like: Turn a Kubernetes deployment into a Knative service ]

Using HAProxy as a proxy

The port that clients connect to on the main web server is called the frontend port. HAProxy is an HTTP load balancer that can be configured as a reverse proxy. Here's how I configured HAProxy by using an Ansible playbook.

Check the system where you need to configure HAProxy

First, confirm that HAProxy is not already installed on the system. You can check with the following command:

rpm -q haproxy
Query for haproxy with rpm

Steps to configure HAProxy

Step 1 - Install HAProxy

To install HAProxy, use the package module and give it the name of the package you want to install:

    - name: "Configure Load balancer"
      package:
        name: haproxy

Step 2 - Copy the configuration file for the reverse proxy

Copy the configuration file so that you can modify it:

cp /etc/haproxy/haproxy.cfg  /root/ws1/haproxy.cfg
display the haproxy.cfg file

Step 3 - Change frontend port and assign backend IPs

By default, the frontend is bound to port 5000. I changed the port number to 8080. I also used a Jinja2 for loop to configure the backend IP addresses. Now you can launch as many web servers as you need, and there is no need to manually configure each IP inside /etc/haproxy/haproxy.cfg. The loop automatically fetches the IP addresses from the Ansible inventory.

backend app
   balance     roundrobin
{% for i in groups['web'] %}
   server  app{{ loop.index }} {{ i }}:80 check
{% endfor %}
Configure HAProxy with haproxy.cfg
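The backend snippet above is only part of the file. For reference, the frontend section after the port change might look like the following minimal sketch (it omits the ACL lines from the stock haproxy.cfg and assumes the backend is named app, as above):

frontend main
   bind *:8080
   default_backend app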

Step 4 - Copy haproxy.cfg to the managed node

Using the template module, copy the config file for HAProxy from the controller node to the managed node:

    - template:
        dest: "/etc/haproxy/haproxy.cfg"
        src: "/root/ws1/haproxy.cfg"

Step 5 - Start the service

Use the service module to start the HAProxy service:

    - service:
        name: "haproxy"
        state: restarted

Check the system where you need to install the httpd web server

To test the HAProxy configuration, you will also configure httpd on your target nodes with the help of Ansible. To check that you don't already have httpd on your system, use the following command:

 rpm -q httpd
Use rpm to check for httpd

Step 1 - Install httpd

The package module is used to install httpd on the managed node:

    - name: "HTTPD CONFIGURE"
      package:
        name: httpd

Step 2 - Copy the webpage

The template module is used to copy your webpage from the source to the destination:

    - template:
        dest: "/var/www/html/index.html"
        src: "/root/ws1/haproxy.html"
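The article doesn't show the contents of haproxy.html. A minimal sketch might embed each managed host's name so you can tell which backend served a request; the ansible_hostname fact used here is an assumption and requires fact gathering to be enabled:

<!-- /root/ws1/haproxy.html: hypothetical example content -->
<html>
  <body>
    <h1>Served by {{ ansible_hostname }}</h1>
  </body>
</html>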

Step 3 - Start the service

The service module is used to start the httpd service:

    - service:
        name: "httpd"
        state: restarted

Complete the playbook to configure the reverse proxy

In this playbook, there are two plays targeting two different host groups: one group is for the web servers, and the other is for the load balancer (an example inventory is shown after the playbook):

---
- hosts: web
  tasks:
    - name: "HTTPD CONFIGURE"
      package:
        name: httpd
    - template:
        dest: "/var/www/html/index.html"
        src: "/root/ws1/haproxy.html"
    - service:
        name: "httpd"
        state: restarted
- hosts: lb
  tasks:
    - name: "Configure Load balancer"
      package:
        name: haproxy
    - template:
        dest: "/etc/haproxy/haproxy.cfg"
        src: "/root/ws1/haproxy.cfg"
    - service:
        name: "haproxy"
        state: restarted
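The playbook relies on an inventory that defines the web and lb groups. The inventory file isn't shown in the article; a hypothetical example with placeholder addresses might look like this:

[web]
# placeholder addresses; replace with your web servers
192.168.0.101
192.168.0.102

[lb]
# placeholder address; replace with your load balancer
192.168.0.100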

Run the playbook

ansible-playbook haproxy.yml
Run the playbook

Output

The playbook runs successfully, and the two target web servers can now be reached through the main server acting as a load balancer.

Message indicates HAProxy installed on 192.168.0.120
Message indicates HAProxy installed on 192.168.0.110
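To verify the round-robin behavior from a client, you can repeatedly request the HAProxy frontend port and watch the responses alternate between the backend servers. The address below is a placeholder for your load balancer's IP:

curl http://192.168.0.100:8080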

[ Looking for more on system automation? Get started with The Automated Enterprise, a free book from Red Hat. ] 

Conclusion

The load balancer and reverse proxy have now been configured by Ansible. You can add a layer of protection and availability to your web services by adding HAProxy to your infrastructure. Be sure to check out the HAProxy and Ansible documentation to learn more.


About the author

Sarthak Jain is a pre-final-year Computer Science undergraduate at the University of Petroleum and Energy Studies (UPES). He is a cloud and DevOps enthusiast familiar with various DevOps tools and methodologies. Sarthak has also mentored more than 2,000 students on the latest tech trends through his community, Dot Questionmark.
