The ability to provide services to customers continuously and with minimal-to-no outages is critical in today’s world. The Red Hat Enterprise Linux (RHEL) High Availability Add-On can help you achieve that goal by improving the reliability, scalability and availability of production systems. High availability (HA) clusters do this by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative.

In this post, I’ll demonstrate using the ha_cluster RHEL system role to configure an HA cluster running an Apache HTTP server with shared storage in active/passive mode.

RHEL system roles are a collection of Ansible roles and modules that are included in RHEL to help provide consistent workflows and streamline the execution of manual tasks. For more information on RHEL HA clustering, please refer to the Configuring and managing high availability clusters documentation.

Environment overview

In my example environment, I have a control node system named controlnode and two managed nodes, rhel8-node1 and rhel8-node2, all running RHEL 8.6. Both managed nodes are powered through an APC power switch with the hostname apc-switch.

I want to create a cluster named rhel8-cluster, consisting of nodes rhel8-node1 and rhel8-node2. The cluster will be running an Apache HTTP server in active/passive mode with a floating IP address serving pages from an ext4 file system mounted on an LVM (logical volume management) logical volume. Fencing will be provided by apc-switch.

Both cluster nodes are connected to shared storage with an ext4 file system mounted on an LVM logical volume. An Apache HTTP server has been installed and configured on both nodes. Refer to the Configuring an LVM volume with an ext4 file system in a Pacemaker cluster and Configuring an Apache HTTP Server chapters in the Configuring and managing high availability clusters documentation.

I’ve already set up an Ansible service account on all three servers, named ansible. I have SSH key authentication set up so that the ansible account on controlnode can log in to each of the nodes. In addition, the ansible service account has been configured with access to the root account via sudo on each node. I’ve also installed the rhel-system-roles and ansible packages on controlnode. For more information on these tasks, refer to the Introduction to RHEL system roles post.
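
If you have not yet prepared such a service account, the setup looks roughly like the following (a minimal sketch, assuming the ansible user already exists on all three systems and that passwordless sudo is acceptable in your environment):

# On controlnode, as the ansible user: create an SSH key and copy it to the managed nodes
[ansible@controlnode ~]$ ssh-keygen -t rsa -b 4096
[ansible@controlnode ~]$ ssh-copy-id ansible@rhel8-node1
[ansible@controlnode ~]$ ssh-copy-id ansible@rhel8-node2

# On each managed node, as root: allow the ansible account to escalate to root via sudo
[root@rhel8-node1 ~]# echo 'ansible ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/ansible
[root@rhel8-node1 ~]# chmod 0440 /etc/sudoers.d/ansible

# On controlnode, as root: install Ansible and the RHEL system roles
[root@controlnode ~]# yum install rhel-system-roles ansible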

Defining the inventory file and role variables

From the controlnode system, the first step is to create a new directory structure:

[ansible@controlnode ~]$ mkdir -p ha_cluster/group_vars

These directories will be used as follows:

  • The ha_cluster directory will contain the playbook and the inventory file.
  • The ha_cluster/group_vars directory will contain variable files that apply to hosts in the respective Ansible inventory groups.

I need to define an Ansible inventory file to list and group the hosts that I want the ha_cluster system role to configure. I’ll create the inventory file at ha_cluster/inventory.yml with the following content:

---
all:
  children:
    rhel8_cluster:
      hosts:
        rhel8-node1:
        rhel8-node2:

The inventory file defines a group named rhel8_cluster and assigns the two managed nodes to it.

Next, I’ll define the role variables that will control the behavior of the ha_cluster system role when it runs. The README.md file for the ha_cluster role is available at /usr/share/doc/rhel-system-roles/ha_cluster/README.md and contains important information about the role, including a list of available role variables and how to use them.
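
Since the documentation ships with the rhel-system-roles package, it can be read directly on the control node:

[ansible@controlnode ~]$ less /usr/share/doc/rhel-system-roles/ha_cluster/README.md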

One of the variables that needs to be defined for the ha_cluster role is the ha_cluster_hacluster_password variable. It defines the password for the hacluster user. I'll use Ansible Vault to encrypt its value so that it is not stored in plain text.

[ansible@controlnode ~]$ ansible-vault encrypt_string 'your-hacluster-password' --name ha_cluster_hacluster_password
New Vault password:
Confirm New Vault password:
ha_cluster_hacluster_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256 376135336466646132313064373931393634313566323739363365616439316130653539656265373663636632383930323230343731666164373766353161630a303434316333316264343736336537626632633735363933303934373666626263373962393333316461616136396165326339626639663437626338343530360a39366664336634663237333039383631326263326431373266616130626333303462386634333430666333336166653932663535376538656466383762343065
Encryption successful

Replace your-hacluster-password with the password of your choice. When you run the command, it prompts for a Vault password (entered twice to confirm), which will later be used to decrypt the variable when the playbook runs. The encrypted variable shown in the output will be placed in the variable file created in the next step.

Now, I’ll create a file that will define variables for my cluster nodes listed in the rhel8_cluster inventory group by creating a file at ha_cluster/group_vars/rhel8_cluster.yml with the following content:

---
ha_cluster_cluster_name: rhel8-cluster
ha_cluster_hacluster_password: !vault |
       $ANSIBLE_VAULT;1.1;AES256
       3761353364666461323130643739313936343135663237393633656164393161306535
       39656265373663636632383930323230343731666164373766353161630a3034343163
       3331626434373633653762663263373536393330393437366662626337396239333331
       6461616136396165326339626639663437626338343530360a39366664336634663237
       3330393836313262633264313732666161306263333034623866343334306663333361
       66653932663535376538656466383762343065
ha_cluster_fence_agent_packages:
  - fence-agents-apc-snmp
ha_cluster_resource_primitives:
  - id: myapc
    agent: stonith:fence_apc_snmp
    instance_attrs:
      - attrs:
          - name: ipaddr
            value: apc-switch
          - name: pcmk_host_map
            value: rhel8-node1:1;rhel8-node2:2
          - name: login
            value: apc
          - name: passwd
            value: apc
  - id: my_lvm
    agent: ocf:heartbeat:LVM-activate
    instance_attrs:
      - attrs:
          - name: vgname
            value: my_vg
          - name: vg_access_mode
            value: system_id
  - id: my_fs
    agent: ocf:heartbeat:Filesystem
    instance_attrs:
      - attrs:
          - name: device
            value: /dev/my_vg/my_lv
          - name: directory
            value: /var/www
          - name: fstype
            value: ext4
  - id: VirtualIP
    agent: ocf:heartbeat:IPaddr2
    instance_attrs:
      - attrs:
          - name: ip
            value: 198.51.100.3
          - name: cidr_netmask
            value: 24
  - id: Website
    agent: ocf:heartbeat:apache
    instance_attrs:
      - attrs:
          - name: configfile
            value: /etc/httpd/conf/httpd.conf
          - name: statusurl
            value: http://127.0.0.1/server-status
ha_cluster_resource_groups:
  - id: apachegroup
    resource_ids:
      - my_lvm
      - my_fs
      - VirtualIP
      - Website

This will cause the ha_cluster role to create a cluster named rhel8-cluster on the nodes.

There will be one fence device, myapc, of type stonith:fence_apc_snmp, defined in the cluster. The device is reachable at the apc-switch address, with login apc and password apc. The cluster nodes are powered through this device: rhel8-node1 is plugged into socket 1, and rhel8-node2 is plugged into socket 2. Since no other fence devices will be used, I specified the ha_cluster_fence_agent_packages variable; this overrides its default value and prevents other fence agents from being installed.
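
For reference, this is roughly the manual pcs configuration that the role automates for the fence device (a sketch only; you do not need to run it yourself, the role creates the device when the playbook runs):

pcs stonith create myapc fence_apc_snmp ipaddr="apc-switch" pcmk_host_map="rhel8-node1:1;rhel8-node2:2" login="apc" passwd="apc"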

Four resources will be running in the cluster:

  • The LVM volume group my_vg will be activated by the my_lvm resource of type ocf:heartbeat:LVM-activate.
  • The ext4 filesystem will be mounted from the shared storage device /dev/my_vg/my_lv onto /var/www by the my_fs resource of type ocf:heartbeat:Filesystem.
  • The floating IP address 198.51.100.3/24 for the HTTP server will be managed by the VirtualIP resource of type ocf:heartbeat:IPaddr2.
  • The HTTP server will be represented by a Website resource of type ocf:heartbeat:apache, with its configuration file stored at /etc/httpd/conf/httpd.conf and status page for monitoring available at http://127.0.0.1/server-status.

All of the resources will be placed into a group named apachegroup so that they run on a single node and start in the specified order: my_lvm, my_fs, VirtualIP, Website.
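
Again for reference, the manual equivalent of these resource and group definitions would look roughly like the following pcs commands (a sketch; the role creates the resources and the group for you):

pcs resource create my_lvm ocf:heartbeat:LVM-activate vgname=my_vg vg_access_mode=system_id --group apachegroup
pcs resource create my_fs ocf:heartbeat:Filesystem device="/dev/my_vg/my_lv" directory="/var/www" fstype="ext4" --group apachegroup
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=198.51.100.3 cidr_netmask=24 --group apachegroup
pcs resource create Website ocf:heartbeat:apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group apachegroup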

Creating the playbook

The next step is creating the playbook file at ha_cluster/ha_cluster.yml with the following content:

---
- name: Deploy a cluster
  hosts: rhel8_cluster
  roles:
    - rhel-system-roles.ha_cluster

This playbook calls the ha_cluster system role for all of the systems defined in the rhel8_cluster inventory group.
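
Before running the playbook against the cluster nodes, an optional sanity check of the playbook syntax can be run from within the ha_cluster directory:

[ansible@controlnode ha_cluster]$ ansible-playbook -i inventory.yml --syntax-check ha_cluster.yml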

Running the playbook

At this point, everything is in place, and I’m ready to run the playbook. For this demonstration, I’m using a RHEL control node, and I’ll run the playbook from the command line. I’ll use the cd command to move into the ha_cluster directory and then use the ansible-playbook command to run the playbook.

[ansible@controlnode ~]$ cd ha_cluster/
[ansible@controlnode ~]$ ansible-playbook -b -i inventory.yml --ask-vault-pass ha_cluster.yml

I specify that the ha_cluster.yml playbook should be run, that it should run as root (the -b flag), that the inventory.yml file should be used as my Ansible inventory (the -i flag), and that I should be prompted for the vault password to decrypt the ha_cluster_hacluster_password variable (the --ask-vault-pass flag).
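
If you prefer a non-interactive run, the prompt can be replaced with a vault password file (a sketch; vault_pass.txt is a hypothetical file name and should be protected accordingly):

[ansible@controlnode ha_cluster]$ echo 'your-vault-password' > vault_pass.txt
[ansible@controlnode ha_cluster]$ chmod 0600 vault_pass.txt
[ansible@controlnode ha_cluster]$ ansible-playbook -b -i inventory.yml --vault-password-file vault_pass.txt ha_cluster.yml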

After the playbook completes, I need to verify that there were no failed tasks:

Screenshot of a Linux terminal showing no failed tasks

Validating the configuration

To validate that the cluster has been set up and is running resources, I’ll log in to rhel8-node1 and display cluster status:

Screenshot of a Linux terminal displaying the cluster status of rhel8-node1
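
If you are following along, the status shown in the screenshot comes from the pcs utility, which requires root privileges:

[ansible@rhel8-node1 ~]$ sudo pcs status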

I also check this on rhel8-node2, which displays the same output.

Next, I open a web browser and connect to IP 198.51.100.3 to verify that the website is accessible.
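
The same check can be done from the command line with curl (assuming the machine you run it from can reach the cluster's network):

[ansible@controlnode ~]$ curl -I http://198.51.100.3/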

To test the failover, I pull the network cable from rhel8-node1. After a while, the cluster performs the failover and fences rhel8-node1. I log in to rhel8-node2 and display the cluster status. It shows that all resources have migrated from rhel8-node1 to rhel8-node2. I also reload the website in the web browser to verify that it is still accessible.

Screenshot of a Linux terminal displaying the cluster status of rhel8-node2
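
If pulling a cable is not practical in your environment, a similar resource migration (without triggering fencing) can be exercised administratively by putting the node in standby mode; this is a sketch of that alternative test:

[ansible@rhel8-node2 ~]$ sudo pcs node standby rhel8-node1
[ansible@rhel8-node2 ~]$ sudo pcs status
[ansible@rhel8-node2 ~]$ sudo pcs node unstandby rhel8-node1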

I reconnect rhel8-node1 to the network and reboot it so that it rejoins the cluster.

Conclusion

The ha_cluster RHEL system role can help you quickly and consistently configure RHEL HA clusters running a variety of workloads. In this post, I covered how to use the role to configure an Apache HTTP server serving a website from shared storage in active/passive mode.

Red Hat offers many RHEL system roles that can help automate other important aspects of your RHEL environment. To explore additional roles, check out this list of available RHEL system roles and start managing your RHEL servers in a more efficient, consistent and automated manner today.

Want to learn more about the Red Hat Ansible Automation Platform? Check out our e-book, The automation architect's handbook.

 


About the author

Tomas Jelinek is a Software Engineer at Red Hat with over seven years of experience with RHEL High Availability clusters.
