To provision new Linux VMs with a working DNS configuration that fits our environment, I created the small Ansible role
resolv.conf, which I would like to introduce below.
This role is minimalistic and includes only the mandatory directories and files:
```
# tree roles/resolv.conf
roles/resolv.conf/
├── handlers
│   └── main.yml
├── tasks
│   └── main.yml
└── templates
    └── resolv.conf.j2
```
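A role like this can then be pulled into a playbook. The following is a hypothetical example; the hosts pattern and the use of privilege escalation are assumptions and need to be adapted to your own inventory:

```yaml
---
# Hypothetical playbook applying the role. The hosts pattern and
# `become: yes` are assumptions, not part of the role itself.
- hosts: all
  become: yes
  roles:
    - resolv.conf
```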
To be able to manage the file /etc/resolv.conf with Ansible, I first had to stop NetworkManager from doing so. For this task, I used the Ansible module ini_file, which sets the required option dns=none in the section [main] of /etc/NetworkManager/NetworkManager.conf:
```yaml
---
- name: make sure line 'dns=none' is set in /etc/NetworkManager/NetworkManager.conf
  ini_file:
    path: /etc/NetworkManager/NetworkManager.conf
    state: present
    no_extra_spaces: yes
    section: main
    option: dns
    value: none
    owner: root
    group: root
    mode: 0644
    backup: yes
  notify:
    - reload NetworkManager
```
The second task uses the template module to create the target configuration from the contents of roles/resolv.conf/templates/resolv.conf.j2 and places it in /etc/resolv.conf on the target node:
```yaml
---
- name: deploy resolv.conf template
  template:
    src: roles/resolv.conf/templates/resolv.conf.j2
    dest: /etc/resolv.conf
    owner: root
    group: root
    mode: 0644
    backup: yes
  notify:
    - reload NetworkManager
```
Currently, my template contains static text. I could have used the copy module to copy this file to the target node. However, I used the template module to keep the possibility open for me to create the content dynamically by using variables.
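To illustrate what such a dynamic version could look like, here is a sketch of a Jinja2 template. The variable names dns_search_domain and dns_nameservers are purely hypothetical, as are the RFC 5737 example addresses; they are not part of my actual role:

```jinja2
{# roles/resolv.conf/templates/resolv.conf.j2 -- hypothetical dynamic
   version; variable names and values are illustrative assumptions. #}
search {{ dns_search_domain | default('example.com') }}
{% for ns in dns_nameservers | default(['192.0.2.53', '192.0.2.54']) %}
nameserver {{ ns }}
{% endfor %}
```

With this approach, the nameserver list could be set per host or group in the inventory, while hosts without their own values fall back to the defaults.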
Via notify:, a handler named reload NetworkManager is called in both tasks of this playbook. I'll cover handlers in the next section.
Handlers are used to trigger actions that only execute if a task makes changes on the target node. Handlers are processed only at the end of the play, and they are executed only once, even if they were notified by several tasks.
In the example described in this text, the handler named
reload NetworkManager executes the defined task, but only if one of the two tasks (or both) from
tasks/main.yml has led to a change on the target node:
```yaml
# cat resolv.conf/handlers/main.yml
---
- name: reload NetworkManager
  service:
    name: NetworkManager
    state: reloaded
```
Note that handlers are not executed until all tasks have been successfully processed. In some cases, this fact can make troubleshooting more difficult. I found an example in the book Ansible: Up and Running, by Lorin Hochstein and René Moser, which I would like to share here. Imagine the following procedure:
- You are running a playbook.
- One of the tasks uses notify on changes.
- In a following task, an error occurs that causes the processing to abort.
- You fix the problem and run the playbook again.
The task from step two already made its changes successfully. When the playbook is executed again, its status will therefore be OK and not CHANGED. However, the handler was never executed, because processing aborted before that point. The handler will not be triggered on the second run either, since the task that notifies it no longer leads to any changes on the target node.
Hochstein writes that handlers are mostly used to restart services or reload their configuration. This can, of course, also be achieved without handlers by explicitly restarting a service at the end of the playbook. Which approach is better is for everyone to decide for themselves.
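The handler-free alternative mentioned above could look like the following sketch, placed as the last task of the play. The trade-off: this restarts the service on every run, even when nothing changed, which is exactly what handlers avoid:

```yaml
# Alternative without a handler: restart unconditionally at the end
# of the play. Runs on every playbook execution, changed or not.
- name: restart NetworkManager
  service:
    name: NetworkManager
    state: restarted
```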
This was one of my first Ansible roles ever. Whether this is the smartest way to create the DNS configuration, or whether there are even more elegant approaches, I don’t know. At least this solution seems to be robust and hasn’t let me down yet.