In this post:
The network and microsoft.sql.server RHEL System Roles can help you quickly and consistently implement network configurations and Microsoft SQL Server.
We cover how to use these roles to configure a bonded network interface, and then install and configure Microsoft SQL Server to listen on this bonded interface.
Start managing your RHEL servers in a more efficient, consistent and automated manner today. Review the list of available RHEL System Roles.
Automation can help increase efficiency, save time, and improve consistency, which is why Red Hat Enterprise Linux (RHEL) includes features that help automate many tasks. RHEL System Roles is a collection of Ansible content included in RHEL to help provide consistent workflows and streamline the execution of many manual tasks.
RHEL 8.5 introduced a new Microsoft SQL Server system role that automates the installation, configuration, and tuning of Microsoft SQL Server on RHEL. This new role, microsoft.sql.server, allows you to quickly and consistently implement Microsoft SQL Server on one or more RHEL systems.
Among the many other available RHEL System Roles is the network system role that allows you to automate the configuration of network settings. The network system role supports implementing a wide variety of network configurations, including bonded interfaces.
This post will walk through an example of using the network system role to implement a bonded active-backup network interface, and use the microsoft.sql.server role to implement a Microsoft SQL Server that is listening on the bonded network interface.
In my example environment, I have a control node system named controlnode and two managed nodes, rhel8-server1 and rhel8-server2, all running RHEL 8.5.
Both rhel8-server1 and rhel8-server2 have a total of three network interfaces:
enp1s0: An existing interface that will not be changed by the network role. The control node will connect to rhel8-server1 and rhel8-server2 over SSH using this interface on each of the hosts.
enp7s0 and enp8s0: Currently unconfigured interfaces that I would like the network system role to configure as part of a network bond.
The network bond on each of the two managed nodes should be implemented with this configuration:
Network bond named bond0 in an active-backup mode.
The enp7s0 and enp8s0 interfaces assigned to the bond.
The bond interface configured with IP address 192.168.100.70 (rhel8-server1) and 192.168.100.80 (rhel8-server2).
The bond interface configured with a gateway (192.168.100.1), DNS server (10.0.0.1), and DNS search path (example.com).
I would also like to have Microsoft SQL Server installed and configured on rhel8-server1 and rhel8-server2, with SQL Server configured to be listening on the bond0 interface’s IP address.
I’ve already set up an Ansible service account on the three servers, named ansible, and have SSH key authentication set up so that the ansible account on controlnode can log in to each of the systems. In addition, the ansible service account has been configured with access to the root account via sudo on each host.
I’ve also installed the rhel-system-roles, ansible-collection-microsoft-sql, and ansible packages on controlnode. For more information on these tasks, refer to the Introduction to RHEL System Roles post.
In this environment, I’m using a RHEL 8 control node, but you can also use an Ansible automation controller or Red Hat Satellite as your RHEL system roles control node. The Ansible automation controller (formerly Ansible Tower) provides many advanced automation features and capabilities that are not available when using a RHEL control node.
Visit the Ansible Automation Platform website for more information on the Ansible automation controller and its functionality. For more information on using Satellite as your RHEL System Roles control node, refer to Automating host configuration with Red Hat Satellite and RHEL System Roles.
Defining the inventory file and role variables
From the controlnode system, the first step is to create a new directory structure:
[ansible@controlnode ~]$ mkdir -p network_mssql/host_vars
These directories will be used as follows:
The network_mssql directory will contain the playbook and the inventory file.
The network_mssql/host_vars directory will contain variable files for each of the managed nodes to define the variables that should apply to each host.
I need to define an Ansible inventory file to list the hosts that I want the network and microsoft.sql.server System Roles to configure. I’ll create the inventory file at network_mssql/inventory.yml with the following content:
all:
  hosts:
    rhel8-server1:
    rhel8-server2:
This inventory lists the two managed nodes in my environment, rhel8-server1 and rhel8-server2.
If you are using Ansible automation controller as your control node, this inventory can be imported into Red Hat Ansible Automation Platform via an SCM project (for example, GitHub or GitLab) or by using the awx-manage utility, as described in the documentation.
Each of the installed roles includes a README.md file containing important information about the role, including a list of available role variables and how to use them. There are also corresponding README.html files in the same directories if you would prefer to view the information in a web browser.
One of the variables I’ll need to define for the microsoft.sql.server role is the mssql_password variable, which defines the password for the sa database user. It is recommended to use Ansible Vault for any sensitive variables, such as passwords, rather than storing them in plain text.
To generate an encrypted string, I'll run the following ansible-vault command, specifying that the variable name is mssql_password and that the value to encrypt is Sql_test_passw0rd (which is what I would like to set as the sa password).
[ansible@controlnode ~]$ ansible-vault encrypt_string 'Sql_test_passw0rd' --name 'mssql_password'
New Vault password: 
Confirm New Vault password: 
mssql_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          396136386664363766383761373165623761393462363465646262326530313337373133343439613336336234643165303339396562386562333031313938300a626630363735303536653537313930663034366361633862343633346139326436353232386366363933323534383232373863616438653561333165633063380a38383934666230623133343735313736316131383932323932653232353366383434306536316462636433613361343937396139383833363439343334666239
Encryption successful
The ansible-vault command prompts for a Vault password that can be used to decrypt the variable value when the playbook is run. It then outputs the encrypted variable, which will be placed in the variable files created in the next step.
Next, I’ll create files to define variables that will specify my desired network bond interface and Microsoft SQL Server configuration for each of the managed nodes by creating the following files:
network_mssql/host_vars/rhel8-server1.yml - will define variables for rhel8-server1
network_mssql/host_vars/rhel8-server2.yml - will define variables for rhel8-server2
These files will be identical other than the IP address configuration for the bond interface and the IP address that SQL Server will be configured to listen on.
The network_mssql/host_vars/rhel8-server1.yml file will contain the following:
network_connections:
  # Create bond interface, bond0
  - name: bond0
    type: bond
    bond:
      mode: active-backup
    interface_name: bond0
    ip:
      address:
        - "192.168.100.70/24"
      gateway4: 192.168.100.1
      dns_search:
        - example.com
      dns:
        - 10.0.0.1
    state: up

  # Add enp7s0 interface to bond0 as bond0-port1
  - name: bond0-port1
    type: ethernet
    controller: bond0
    interface_name: enp7s0
    state: up

  # Add enp8s0 interface to bond0 as bond0-port2
  - name: bond0-port2
    type: ethernet
    controller: bond0
    interface_name: enp8s0
    state: up

# Define Microsoft SQL Server role variables
mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true
mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true
mssql_accept_microsoft_sql_server_standard_eula: true
mssql_edition: Evaluation

# Specify that Microsoft SQL Server should listen on IP address of bond interface
mssql_ip_address: 192.168.100.70

# mssql_password variable encrypted with Ansible Vault
mssql_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  396136386664363766383761373165623761393462363465646262326530313337373133343439613336336234643165303339396562386562333031313938300a626630363735303536653537313930663034366361633862343633346139326436353232386366363933323534383232373863616438653561333165633063380a38383934666230623133343735313736316131383932323932653232353366383434306536316462636433613361343937396139383833363439343334666239
I’ll also create the network_mssql/host_vars/rhel8-server2.yml file with the same content, other than the IP address (which will be 192.168.100.80 for rhel8-server2):
[ansible@controlnode ~]$ cd network_mssql/host_vars/
[ansible@controlnode host_vars]$ cp rhel8-server1.yml rhel8-server2.yml
[ansible@controlnode host_vars]$ sed -i 's/192.168.100.70/192.168.100.80/g' rhel8-server2.yml
The variables within these files define the desired configuration for the network bond interface and for Microsoft SQL Server, including accepting several EULAs and specifying that an evaluation edition should be installed.
The mssql_ip_address variable specifies that each host should configure SQL Server to listen on the bond interface’s IP address (192.168.100.70 for rhel8-server1, and 192.168.100.80 for rhel8-server2). In this example, I’m installing Microsoft SQL Server 2019 (the default version the role installs); however, the role also supports installing Microsoft SQL Server 2017.
Note that in this example I am configuring a SQL Server without support for TLS encryption; however, the microsoft.sql.server role can also implement SQL Server with TLS. For more information, refer to the role variables that start with mssql_tls in the README.md file at /usr/share/microsoft/sql-server/README.md.
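For reference, a TLS-enabled configuration would add a handful of variables alongside the ones shown above. The fragment below is a hypothetical sketch; the variable names are based on the role's README.md, and you should confirm the names, accepted values, and required certificate files against that README before using them:

```yaml
# Hypothetical sketch of additional microsoft.sql.server role variables.
# Verify these names and values against /usr/share/microsoft/sql-server/README.md.
mssql_version: 2017                      # install SQL Server 2017 instead of the default 2019
mssql_tls_enable: true                   # enable TLS encryption for SQL Server connections
mssql_tls_cert: sql_cert.pem             # certificate file to copy to the managed node
mssql_tls_private_key: sql_cert_key.pem  # private key file to copy to the managed node
```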
Creating the playbook
The next step is creating the playbook file at network_mssql/network_mssql.yml with the following content:
- name: Configure network bond with network system role
  hosts: all
  roles:
    - redhat.rhel_system_roles.network

- name: Run Microsoft SQL Server system role
  hosts: all
  roles:
    - microsoft.sql.server

- name: Open firewall for Microsoft SQL Server
  hosts: all
  tasks:
    - firewalld:
        port: 1433/tcp
        permanent: yes
        immediate: yes
        state: enabled
The first play, Configure network bond with network system role, calls the network RHEL System Role to configure the network bond, utilizing the previously defined role variables.
The second play, Run Microsoft SQL Server system role, calls the microsoft.sql.server system role to install, configure, and tune SQL Server, utilizing the previously defined role variables.
The final play, Open firewall for Microsoft SQL Server, runs a firewalld task to open TCP port 1433, which is the default port that SQL Server listens on.
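As an alternative to the raw firewalld task, the firewall RHEL System Role could open the port, keeping the whole playbook within system roles. This is a hypothetical sketch, not the playbook used in this post; check the firewall role's README.md for the exact form of its firewall variable:

```yaml
# Hypothetical alternative: open TCP port 1433 with the firewall system role.
# Confirm the firewall variable's structure in the role's README.md.
- name: Open firewall for Microsoft SQL Server
  hosts: all
  vars:
    firewall:
      - port: 1433/tcp
        state: enabled
  roles:
    - redhat.rhel_system_roles.firewall
```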
If you are using Ansible automation controller as your control node, you can import this playbook into Red Hat Ansible Automation Platform by creating a Project, following the documentation. It is very common to use Git repositories to store Ansible playbooks. You can then create a Job Template, again following the documentation, which ties together the playbook, credentials, and inventory needed to run the automation.
Running the playbook
At this point, everything is in place, and I’m ready to run the playbook. For this demonstration, I’m using a RHEL control node and will run the playbook from the command line. I’ll use the cd command to move into the network_mssql directory, and then use the ansible-playbook command to run the playbook:
[ansible@controlnode ~]$ cd network_mssql/
[ansible@controlnode network_mssql]$ ansible-playbook network_mssql.yml -b -i inventory.yml --ask-vault-pass
I specify that the network_mssql.yml playbook should be run, that Ansible should escalate to root (the -b flag), that the inventory.yml file should be used as the Ansible inventory (the -i flag), and that I should be prompted for the vault password (the --ask-vault-pass flag) so that the mssql_password variable can be decrypted.
After the playbook completes, I review the output to verify that there were no failed tasks.
If you are using Ansible automation controller as your control node, you can launch the job from the automation controller web interface.
Validating the configuration
To validate the network bond interface configuration, I’ll display the contents of /proc/net/bonding/bond0 on rhel8-server1:
[ansible@controlnode network_mssql]$ ssh rhel8-server1 cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: enp7s0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: enp7s0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:dc:00:87
Slave queue ID: 0

Slave Interface: enp8s0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:f2:a0:86
Slave queue ID: 0
I also verified this on rhel8-server2, which displayed similar output.
To check the status of Microsoft SQL Server, from rhel8-server1, I’ll attempt to establish a connection to the SQL Server listening on rhel8-server2’s bonded network interface:
[root@rhel8-server1 ~]# /opt/mssql-tools/bin/sqlcmd -S 192.168.100.80 -U sa
Password: 
1> SELECT name FROM sys.databases
2> GO
name
--------------------------------------------------------------------------------
master
tempdb
model
msdb

(4 rows affected)
1> SELECT @@version
2> GO
--------------------------------------------------------------------------------
Microsoft SQL Server 2019 (RTM-CU14) (KB5007182) - 15.0.4188.2 (X64)
	Nov  3 2021 19:19:51
	Copyright (C) 2019 Microsoft Corporation
	Enterprise Evaluation Edition (64-bit) on Linux (Red Hat Enterprise Linux 8.5 (Ootpa)) <X64>

(1 rows affected)
1>
I also repeated this test in the opposite direction, connecting from rhel8-server2 to the SQL Server listening on rhel8-server1’s bonded network interface, with similar results.
The network and microsoft.sql.server RHEL System Roles can help you quickly and consistently implement network configurations and Microsoft SQL Server. In this post, I covered how to use these roles to configure a bonded network interface, and then install and configure Microsoft SQL Server to listen on this bonded interface.
Red Hat offers many RHEL System Roles that can help automate other important aspects of your RHEL environment. To explore additional roles, review the list of available RHEL System Roles and start managing your RHEL servers in a more efficient, consistent and automated manner today.
Want to learn more about the Red Hat Ansible Automation Platform? Check out our e-book, The automation architect's handbook.
About the author
Brian Smith is a Product Manager at Red Hat focused on RHEL automation and management. He has been at Red Hat since 2018, previously working with Public Sector customers as a Technical Account Manager (TAM).