In this post:
Learn how to set up the vpn RHEL System Role to quickly and easily implement host-to-host VPN connections across your RHEL environment in an automated manner.
In today's world where organizations frequently use multiple cloud providers, datacenters, and systems in edge environments, secure communication between these distributed systems is essential. Host-to-host VPN tunnels allow for encrypted communication between systems, and are frequently used when traffic needs to traverse untrusted networks such as the public internet.
While host-to-host VPN tunnels can be implemented on Red Hat Enterprise Linux (RHEL) manually, this can be time consuming and error-prone. Red Hat introduced the VPN RHEL System Role in RHEL 8.5 to provide an automated solution to implement host-to-host VPN connections, as well as opportunistic mesh VPNs.
RHEL System Roles are a collection of Ansible roles and modules that are included in RHEL to help provide consistent workflows and streamline the execution of manual tasks. For more information on VPNs in RHEL, refer to the configuring a VPN with IPsec documentation.
In my example environment, I have a control node system named controlnode running RHEL 8 and four managed nodes: rhel8-server1, rhel8-server2, rhel7-server1, and rhel7-server2 (the first two running RHEL 8, and the other two running RHEL 7). I would like to configure a host-to-host VPN tunnel between each pair of these four systems.
Each host-to-host VPN tunnel connects two hosts together and must be configured on both hosts. This means that each of the four managed nodes will have three host-to-host tunnel configurations implemented, for a total of (4 × 3) / 2 = 6 unique tunnels across the environment.
I’ve already set up an Ansible service account on the five servers, named ansible, and have SSH key authentication set up so that the ansible account on controlnode can log in to each of the systems. In addition, the ansible service account has been configured with access to the root account via sudo on each host.
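This service account setup is done outside of the role itself. As a rough sketch (the key path, hostnames, and sudoers entry below are illustrative; adapt them to your environment), it amounts to something like:

```shell
# Generate an SSH key pair for the ansible service account (no passphrase here
# for brevity; consider using one together with ssh-agent in practice)
ssh-keygen -t rsa -N '' -f /tmp/ansible_id_rsa -q

# Copy the public key to each managed node (prompts for a password once per host):
# for host in rhel8-server1 rhel8-server2 rhel7-server1 rhel7-server2; do
#     ssh-copy-id -i /tmp/ansible_id_rsa.pub "ansible@$host"
# done

# On each managed node, grant the ansible account passwordless sudo, for example:
# echo 'ansible ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/ansible
```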
I’ve also installed the rhel-system-roles and ansible packages on controlnode. For more information on these tasks, refer to the Introduction to RHEL System Roles post.
In this environment, I’m using a RHEL 8 control node, but you can also use Ansible automation controller or Red Hat Satellite as your RHEL system roles control node. Ansible automation controller provides many advanced automation features and capabilities that are not available when using a RHEL control node.
For more information on Ansible automation controller and its functionality, refer to this overview page. For more information on using Satellite as your RHEL system roles control node, refer to Automating host configuration with Red Hat Satellite and RHEL System Roles.
Defining the inventory file and role variables
From the controlnode system, the first step is to create a new directory structure:
[ansible@controlnode ~]$ mkdir -p vpn/group_vars
These directories will be used as follows:
The vpn directory will contain the playbook and the inventory file.
The vpn/group_vars directory will contain variable files that apply to hosts in the corresponding Ansible inventory groups.
I need to define an Ansible inventory file to list and group the hosts that I want the vpn System Role to configure. I’ll create the main inventory file at vpn/inventory.yml with the following content:
all:
  children:
    host_to_host_vpn:
      hosts:
        rhel8-server1:
        rhel8-server2:
        rhel7-server1:
        rhel7-server2:
If you are using Ansible automation controller as your control node, this inventory can be imported into Red Hat Ansible Automation Platform via an SCM project (for example, GitHub or GitLab) or by using the awx-manage utility as specified in the documentation.
This inventory defines an inventory group named host_to_host_vpn, and assigns the four managed nodes to this inventory group.
Next, I’ll define the role variables that will control the behavior of the vpn System Role when it runs. The README.md file for the vpn role, available at /usr/share/doc/rhel-system-roles/vpn/README.md, contains important information about the role including a list of available role variables and how to use them.
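By default, the role secures the tunnels with pre-shared keys that it generates for you. The README also documents an auth_method option for switching to certificate-based authentication. As a rough sketch (the certificate names below are illustrative; check the README for the exact requirements in your RHEL version), such a configuration looks along these lines:

```yaml
vpn_connections:
  - auth_method: cert        # use certificates instead of the default pre-shared keys
    hosts:
      rhel8-server1:
        cert_name: rhel8-server1-cert   # nickname of a certificate in the host's NSS database
      rhel8-server2:
        cert_name: rhel8-server2-cert
    auto: start
```

With certificate-based authentication, the certificates themselves must already be present on the managed nodes; the role does not issue them.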
I’ll create a file that will define variables for my managed nodes listed in the host_to_host_vpn inventory group by creating a file at vpn/group_vars/host_to_host_vpn.yml with the following content:
vpn_connections:
  - hosts:
      rhel8-server1:
      rhel8-server2:
      rhel7-server1:
      rhel7-server2:
    auto: start
In this example, I’m using the vpn_connections variable to specify my desired VPN configuration: the list of hosts that should be connected with host-to-host VPN tunnels, with auto set to start so that each tunnel is loaded and started when the ipsec service starts.
Creating the playbook
The next step is creating the playbook file at vpn/vpn.yml with the following content:
- name: Open Firewall for ipsec
  hosts: host_to_host_vpn
  tasks:
    - firewalld:
        service: ipsec
        permanent: yes
        immediate: yes
        state: enabled

- hosts: host_to_host_vpn
  roles:
    - rhel-system-roles.vpn
The first play, Open Firewall for ipsec, runs on the four managed nodes in the host_to_host_vpn inventory group and opens the firewall for the ipsec service.
The second play, also running on the four managed nodes in the host_to_host_vpn group, runs the vpn RHEL System Role.
If you are using Ansible automation controller as your control node, you can import this Ansible playbook into Red Hat Ansible Automation Platform by creating a Project, following the documentation provided here. It is very common to use Git repositories to store Ansible playbooks. In Ansible Automation Platform, you then create a Job Template, which ties the playbook together with the credentials and inventory needed to run it, following the documentation here.
Running the playbook
At this point, everything is in place, and I’m ready to run the playbook. For this demonstration, I’m using a RHEL control node and will run the playbook from the command line. I’ll use the cd command to move into the vpn directory, and then use the ansible-playbook command to run the playbook.
[ansible@controlnode ~]$ cd vpn
[ansible@controlnode vpn]$ ansible-playbook vpn.yml -b -i inventory.yml
I specify that the vpn.yml playbook should be run, that it should escalate to root (the -b flag), and that the inventory.yml file should be used as my Ansible inventory (the -i flag).
After the playbook completes, I verify in the PLAY RECAP output that there were no failed tasks (each host reports failed=0).
If you are using Ansible automation controller as your control node, you can launch the job from the automation controller web interface.
Validating the configuration
I can login to each of the managed nodes to validate that the VPN configuration is set up correctly. I’ll start by logging in to rhel8-server1:
[ansible@controlnode vpn]$ ssh rhel8-server1
Once logged in, I can run sudo ipsec trafficstatus to see the three host-to-host VPN tunnels:
[ansible@rhel8-server1 ~]$ sudo ipsec trafficstatus
006 #5: "rhel8-server1-to-rhel7-server1", type=ESP, add_time=1636554445, inBytes=0, outBytes=0, id='@rhel7-server1'
006 #6: "rhel8-server1-to-rhel7-server2", type=ESP, add_time=1636554445, inBytes=0, outBytes=0, id='@rhel7-server2'
006 #4: "rhel8-server1-to-rhel8-server2", type=ESP, add_time=1636554445, inBytes=0, outBytes=0, id='@rhel8-server2'
Each line shows inBytes and outBytes as 0. To generate traffic over the tunnels, I can ping each of the other hosts:
[ansible@rhel8-server1 ~]$ for host in rhel7-server1 rhel7-server2 rhel8-server2; do ping -c 4 $host; done
PING rhel7-server1 (192.168.0.57) 56(84) bytes of data.
64 bytes from rhel7-server1 (192.168.0.57): icmp_seq=1 ttl=64 time=0.907 ms
64 bytes from rhel7-server1 (192.168.0.57): icmp_seq=2 ttl=64 time=0.860 ms
64 bytes from rhel7-server1 (192.168.0.57): icmp_seq=3 ttl=64 time=0.832 ms
64 bytes from rhel7-server1 (192.168.0.57): icmp_seq=4 ttl=64 time=0.771 ms

--- rhel7-server1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 0.771/0.842/0.907/0.057 ms
PING rhel7-server2 (192.168.0.58) 56(84) bytes of data.
64 bytes from rhel7-server2 (192.168.0.58): icmp_seq=1 ttl=64 time=0.802 ms
64 bytes from rhel7-server2 (192.168.0.58): icmp_seq=2 ttl=64 time=0.965 ms
64 bytes from rhel7-server2 (192.168.0.58): icmp_seq=3 ttl=64 time=0.733 ms
64 bytes from rhel7-server2 (192.168.0.58): icmp_seq=4 ttl=64 time=0.829 ms

--- rhel7-server2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 0.733/0.832/0.965/0.086 ms
PING rhel8-server2 (192.168.0.56) 56(84) bytes of data.
64 bytes from rhel8-server2 (192.168.0.56): icmp_seq=1 ttl=64 time=0.918 ms
64 bytes from rhel8-server2 (192.168.0.56): icmp_seq=2 ttl=64 time=0.735 ms
64 bytes from rhel8-server2 (192.168.0.56): icmp_seq=3 ttl=64 time=0.749 ms
64 bytes from rhel8-server2 (192.168.0.56): icmp_seq=4 ttl=64 time=0.789 ms

--- rhel8-server2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 0.735/0.797/0.918/0.080 ms
If I run sudo ipsec trafficstatus again, I can now see each of the VPN tunnels shows 336 inBytes and outBytes:
[ansible@rhel8-server1 ~]$ sudo ipsec trafficstatus
006 #5: "rhel8-server1-to-rhel7-server1", type=ESP, add_time=1636554445, inBytes=336, outBytes=336, id='@rhel7-server1'
006 #6: "rhel8-server1-to-rhel7-server2", type=ESP, add_time=1636554445, inBytes=336, outBytes=336, id='@rhel7-server2'
006 #4: "rhel8-server1-to-rhel8-server2", type=ESP, add_time=1636554445, inBytes=336, outBytes=336, id='@rhel8-server2'
The vpn RHEL System Role can help you quickly and easily implement VPN connections across your RHEL environment in an automated manner. While this blog post focused on host-to-host VPN tunnels, the vpn RHEL System Role also supports opportunistic mesh VPNs.
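For the opportunistic mesh case, the role README describes an opportunistic flag and a list of policies that map network ranges to IPsec behavior. As a rough sketch (the CIDRs below are illustrative; consult the README for the exact variables in your RHEL version), a mesh configuration looks along these lines:

```yaml
vpn_connections:
  - opportunistic: true      # opportunistic mesh instead of explicit host pairs
    auth_method: cert        # mesh configurations authenticate with certificates
    policies:
      - policy: private            # encrypt traffic by default
        cidr: default
      - policy: private-or-clear   # encrypt when possible, fall back to clear
        cidr: 198.51.100.0/24
      - policy: clear              # never encrypt, e.g., the local management network
        cidr: 192.0.2.0/24
```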
We offer many RHEL System Roles that can help automate other important aspects of your RHEL environment. To explore additional roles, review the list of available RHEL System Roles and start managing your RHEL servers in a more efficient, consistent and automated manner today.
Want to learn more about Red Hat Ansible Automation Platform? Check out our e-book, "The automated enterprise: Unify people and processes."
About the author
Brian Smith is a Product Manager at Red Hat focused on RHEL automation and management. He has been at Red Hat since 2018, previously working with Public Sector customers as a Technical Account Manager (TAM).