
In today’s IT environments, organizations must manage an ever-growing number of systems that need to scale within and beyond the traditional datacenter. This makes organizations increasingly dependent on automation. Deploying and managing an operating system like Red Hat Enterprise Linux (RHEL) can be time-consuming without automation, with administration and maintenance tasks taking significantly longer to complete.

RHEL System Roles are a collection of Ansible roles and modules that help automate the management and configuration of RHEL systems. RHEL System Roles can provide consistent and repeatable configuration, reduce technical burdens, and streamline administration. In this post, we’ll show you how to put Red Hat's know-how to work with RHEL System Roles, so you can spend more time doing work that's valuable to the business and less time reinventing the wheel.

RHEL System Roles overview

Administrators can select from a library of common services and configuration tasks provided by RHEL System Roles. This interface enables managing system configurations across multiple versions of RHEL (RHEL 8, RHEL 7, and in some cases RHEL 6) and supports the consistent execution of otherwise manual tasks across physical, virtual, private cloud, and public cloud environments.

RHEL System Roles are supported with your RHEL subscription and are packaged as RPMs included with RHEL. However, if you have a Red Hat Ansible Automation Platform subscription and utilize Ansible Tower, you can also access the latest RHEL System Roles from Ansible Automation Hub for use in Tower. Likewise, if you have Red Hat Smart Management subscriptions and utilize Red Hat Satellite, you can initiate RHEL System Roles from Satellite.

You will find a wide variety of RHEL System Roles:

Security-related roles:

  • selinux allows for configuration of SELinux.

  • certificate can manage TLS/SSL certificate issuance and renewal.

  • tlog configures session recording.

  • nbde_client and nbde_server configure network bound disk encryption.

  • ssh and sshd configure the SSH client and server, respectively.

  • crypto_policies configures the system-wide cryptographic policies. 

Configuration-related roles:

  • timesync configures time synchronization.

  • network configures networking.

  • kdump configures the kernel crash dump.

  • storage configures local storage.

  • kernel_settings configures kernel settings.

  • metrics configures system metrics (using Performance Co-Pilot).

  • logging configures logging (rsyslog).

  • postfix (tech preview) configures the postfix email server.

  • ha_cluster (tech preview) manages high availability clustering.

Workload-related roles*:

  • SAP-related roles that assist with implementing the SAP workload.

*We believe additional workload-specific roles would be useful, such as one for Microsoft SQL Server, but these are still in the evaluation and planning stage.

For an up-to-date list of available roles, as well as a support matrix that details which versions of RHEL are supported by each role, refer to this page.

Control node 

RHEL System Roles utilize Ansible, which has a concept of a control node. The control node is where Ansible and the RHEL System Roles are installed. The control node needs to have connectivity over SSH to each of the hosts that will be managed via the RHEL System Roles (which are referred to as managed nodes). The managed nodes do not need to have the RHEL System Roles or Ansible installed on them.

[Figure 1: Control node and managed nodes]
  • Control node: The system with Ansible and the RHEL System Roles installed.

  • Managed nodes: The systems being managed by RHEL System Roles.

There are several options for what can be used as the control node for RHEL System Roles: Ansible Tower, Red Hat Satellite, or a RHEL host. 

If you have an Ansible Automation Platform subscription, it is recommended to use Ansible Tower as the control node. Ansible Tower offers advanced features such as a visual dashboard, job scheduling, notifications, workflows, and advanced inventory management. For more information on Ansible Tower, refer to this site.

For those utilizing Red Hat Satellite, it is also possible to use Satellite as the control node. Refer to this previous post, in which I cover an overview of how to set this up. However, there is currently a limitation when using Satellite as the control node: Satellite 6.x is only supported to run on RHEL 7 hosts, and currently, not all of the newer RHEL System Roles are available in the RHEL 7 version of the rhel-system-roles RPM.

You can also utilize a RHEL host as the control node. It is generally recommended that you use the latest version of RHEL on the control node so that you have access to the latest RHEL System Role content. You could also utilize a RHEL 7 host as a control node; however, the rhel-system-roles RPM available on RHEL 7 currently doesn’t contain all of the newer roles, as previously mentioned. It is possible to utilize Ansible Automation Hub to download the latest version of the RHEL System Roles on RHEL 7 (or RHEL 8); however, this requires an Ansible Automation Platform subscription.
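As a quick illustration, downloading the roles from Automation Hub is done with the ansible-galaxy command. This is a sketch that assumes Automation Hub is already configured as a Galaxy server in your ansible.cfg; the collection name shown is the one published to Automation Hub and should be verified against your Hub instance:

# Install the RHEL System Roles collection from Automation Hub
# (assumes Automation Hub is configured as a Galaxy server in ansible.cfg)
$ ansible-galaxy collection install redhat.rhel_system_roles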

If utilizing a RHEL 8 or RHEL 7 host as your control node, follow the steps listed in this article to install Ansible and RHEL System Roles on the control node. 
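As a rough sketch of those steps on a RHEL 8 control node (the repository name varies by RHEL version and architecture, so treat the linked article as authoritative):

# Enable the Ansible repository, then install the RHEL System Roles
# package, which pulls in Ansible as a dependency
$ sudo subscription-manager repos --enable ansible-2.9-for-rhel-8-x86_64-rpms
$ sudo yum install rhel-system-roles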

SSH configuration

The control node needs to have SSH access to each of the managed hosts. If you have firewalls on your network, this might involve ensuring that port 22 is open between the control node and each of the managed hosts. 
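For example, if the managed nodes run firewalld, you can verify that SSH traffic is allowed with commands along these lines (a sketch, assuming the default firewalld zone):

# On each managed node: check which services are allowed,
# and permit SSH if it isn't already listed
$ sudo firewall-cmd --list-services
$ sudo firewall-cmd --permanent --add-service=ssh
$ sudo firewall-cmd --reload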

In addition, the control node will need to be able to authenticate over SSH to each managed node and escalate privileges to the root account.

If you are utilizing Ansible Tower as your control node, you probably already have this set up, as it is a basic prerequisite for running a playbook on hosts.

After the Ansible inventory is set up, the Ansible ping module can validate that this SSH configuration was set up correctly (this will be covered later in the post). 

If you are utilizing Red Hat Satellite as your control node, it utilizes the remote execution configuration to connect and authenticate to hosts. If you don’t already have remote execution configured in your Satellite environment, refer to the Satellite documentation on how to set this up. 

If you are utilizing a RHEL 8 or RHEL 7 host as your control node, you’ll need to:

  • Determine which account you would like to use on the control node and managed hosts.

    • While it’s possible to use the root account, it is generally recommended to create and utilize a service account.

  • Generate an SSH key for this user on the control node with the ssh-keygen command.

  • Distribute the public key to each of the managed hosts (which the ssh-copy-id command can help with).

  • If you are using a service account, you’ll need to configure sudo access on each managed node so that the service account can escalate its privileges to root (see the sketch after this list).
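Here is a minimal sketch of those steps, assuming a hypothetical service account named ansible and one of the hosts from the upcoming inventory examples:

# On the control node, as the service account: generate a key pair
$ ssh-keygen

# Distribute the public key to each managed host (repeat per host)
$ ssh-copy-id ansible@rhel8-server1

# On each managed node: grant the service account passwordless sudo,
# for example with a sudoers drop-in (content shown is illustrative)
$ echo 'ansible ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/ansible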

Inventory file

Ansible needs to be provided with a list of managed nodes on which it should run the RHEL System Roles. This is done via an Ansible inventory file.

If you are utilizing Ansible Tower as your control node, there are a wide variety of options for inventory. For more information, refer to the Ansible Tower documentation.

For those using Satellite as the control node, you can assign Ansible roles either to individual hosts or to groups of hosts via host groups. For more information, refer to the Satellite documentation.

If you are utilizing a RHEL 8 or RHEL 7 host as your control node, you’ll need to define an inventory in a text file. 

The simplest inventory file lists one host per line, as shown in this example:

$ cat inventory
rhel8-server1
rhel8-server2
rhel7-server1
rhel7-server2 

It is also possible to define groups in the inventory file, as in the following example where the prod and dev groups are defined:

$ cat inventory
[prod]
rhel8-server1
rhel7-server1

[dev]
rhel8-server2
rhel7-server2

The previous two examples used INI-formatted inventories. It is also possible to define inventories in YAML format. The following example defines the same prod and dev groups, but in a YAML-formatted inventory:

$ cat inventory.yml
all:
  children:
    prod:
      hosts:
        rhel8-server1:
        rhel7-server1:
    dev:
      hosts:
        rhel8-server2:
        rhel7-server2:

Now that we’ve defined an inventory file, we can utilize the Ansible ping module to validate that our SSH configuration was set up correctly and that our control node can communicate with and connect to each managed host. The command in the following example tells Ansible to use the ping module, to use the inventory file named inventory.yml, and to connect to all of the hosts defined in the inventory:

[Figure: Ansible ping module example]
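If you are following along on a RHEL control node, the command and a representative response look like this (output trimmed to one host):

$ ansible all -m ping -i inventory.yml
rhel8-server1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}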

In this example, Ansible successfully connected to all four hosts I had defined in the inventory file.

You can also define variables in the inventory; however, we’ll cover that later when we talk about variables.

Ansible inventories are very powerful and flexible, and I’ve just covered the basics. For more information on Ansible inventory files, refer to the Ansible documentation.

Role variables

Ansible variables allow us to specify our desired configuration to the RHEL System Roles. For example, if we are using the timesync role to set up time synchronization, we need the ability to tell the timesync role which NTP servers in our environment should be utilized by our managed nodes. 

Each role has a documented list of role variables in its README.md file, which is accessible at /usr/share/doc/rhel-system-roles/<role_name>/README.md.

For example, the timesync role specifies that the timesync_ntp_servers variable is used to list the NTP servers that should be used. There are also additional variables for the timesync role documented in this file, such as timesync_ntp_provider, timesync_min_sources, etc.
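To review these variables on the control node, you can read the role's README directly:

$ less /usr/share/doc/rhel-system-roles/timesync/README.md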

Ansible variables are another powerful feature of Ansible, and again, I’ve only covered the basics here. For more information, refer to the Ansible documentation.

If you are using Satellite as your RHEL System Roles control node, refer to this blog post for information and examples on how to define the role variables in Satellite. 

There are two main locations where the Ansible variables can be specified: in the inventory, or directly in the playbooks. I’ll cover these two options in the next sections. 

Defining role variables directly in the playbook

The role variables can be specified directly in the playbook that calls the RHEL System Role. For example, the playbook below defines the timesync role variables to specify three NTP servers, each using the iburst option.

$ cat timesync.yml 
- hosts: all
  become: true
  vars:
    timesync_ntp_servers:
      - hostname: ntp1.example.com
        iburst: yes
      - hostname: ntp2.example.com
        iburst: yes
      - hostname: ntp3.example.com
        iburst: yes
  roles:
    - rhel-system-roles.timesync

We could initiate this playbook run with the ansible-playbook command, specifying the playbook file name and our previously created inventory file:

$ ansible-playbook timesync.yml -i inventory.yml 

While defining the role variables directly in the playbook file is easy to do and convenient, it will require you to edit the playbook every time the role variables need to be updated (for example, if your NTP server was replaced and you needed to update all of the hosts to utilize the new NTP server). It is considered a better practice to define the role variables outside of the playbook so that the playbook doesn’t have to be frequently edited and updated. 

Defining variables in the inventory

It is also possible to define the role variables in the Ansible inventory rather than in the playbook. As previously mentioned, this will avoid the need to frequently edit the playbook itself. 

By defining the variables in the inventory, we can also easily define variables based on the inventory groups. 

In this example, I’d like to use the timesync role to configure time synchronization on my servers, and I would like to define one set of NTP servers for the hosts in the prod group, and a different set of NTP servers for the hosts in the dev group. 

I’ll start by creating an inventory directory:

$ mkdir inventory
$ cd inventory

Within the inventory directory, I’ll define an inventory file named inventory.yml, defining two servers in the prod group and two servers in the dev group:

$ cat inventory.yml 
all:
  children:
    prod:
      hosts:
        rhel8-server1:
        rhel7-server1:
    dev:
      hosts:
        rhel8-server2:
        rhel7-server2:

Within the inventory directory, I’ll create a group_vars directory:

$ mkdir group_vars
$ cd group_vars

And within the group_vars directory, I’ll create both a prod.yml file and a dev.yml file to define the variables for hosts in the prod inventory group, and dev inventory group, respectively. This will result in the servers within the dev group being configured with one set of NTP servers, and the servers within the prod group being configured with a different set of NTP servers. 

$ cat dev.yml
timesync_ntp_servers:
  - hostname: dev-ntp1.example.com
    iburst: yes
  - hostname: dev-ntp2.example.com
    iburst: yes
  - hostname: dev-ntp3.example.com
    iburst: yes

$ cat prod.yml
timesync_ntp_servers:
  - hostname: prod-ntp1.example.com
    iburst: yes
  - hostname: prod-ntp2.example.com
    iburst: yes
  - hostname: prod-ntp3.example.com
    iburst: yes

I can then create a new playbook that is much shorter and simpler because it no longer contains the variables:

$ cd ../..
$ cat timesync2.yml 
- hosts: all
  become: true
  roles:
    - rhel-system-roles.timesync

I can then run this playbook with ansible-playbook, specifying that the inventory directory should be used as the inventory source:

$ ansible-playbook timesync2.yml -i inventory/

Once the playbook runs, the two servers in the dev group will be configured with the dev NTP servers, and the two servers in the prod group will be configured with the prod NTP servers.
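To spot-check the result, you could run an ad hoc command against one of the groups. This is a sketch, assuming the role selected chrony as the NTP provider (the default on recent RHEL releases):

# Confirm the dev group received the dev NTP servers
$ ansible dev -m command -a 'grep ^server /etc/chrony.conf' -i inventory/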

Understanding the next steps

RHEL System Roles were developed because automation is essential when it comes to managing complex environments. Automation can help you deliver consistency and keep up with increasing demands. As a next step, start planning how you can implement RHEL System Roles in your environment.


About the author

Brian Smith is a Product Manager at Red Hat focused on RHEL automation and management. He has been at Red Hat since 2018, previously working with Public Sector customers as a Technical Account Manager (TAM).