It all started when our quality engineering (QE) team realized that we were redeploying our machines far too often, and we couldn't keep up with new build testing for our project.
To make things easier, we decided to collaborate with our virtualization QE team to help us understand how they automate their deployments. It turns out that they use Ansible roles written to perform specific tasks. After talking to them, I spent a good amount of time understanding and automating Red Hat Virtualization for our team.
This article outlines how it went. You should know that I had not worked with Jenkins Pipeline before, so my work may not reflect best practices. I am still learning.
Setting up the files
In addition to my internal files, I turned to GitHub's oVirt Ansible section. The particularly useful Ansible roles here were `ovirt-ansible-hosted-engine` and `ovirt-hosted-engine-setup`, both of which come up again below.
In my `jenkins_files`, I started by checking out the Git repo. Then, I installed the packages required on the Jenkins node; note that both DNF and pip packages are needed. Ultimately, it took a lot of trial and error to figure out the correct packages and versions to use. Once everything worked in my local environment (on my computer), I started translating it all to Jenkins Pipeline.
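For flavor, here is a minimal sketch of that node-preparation step as Ansible tasks. The package names and versions are placeholders, not the exact set we ended up pinning:

```yaml
# Minimal sketch of preparing the Jenkins node; package names are
# placeholders, not the exact list we settled on.
- name: Install the DNF packages the roles depend on
  ansible.builtin.dnf:
    name:
      - gcc
      - python3-devel
      - libcurl-devel
    state: present

- name: Install the pip packages the roles depend on
  ansible.builtin.pip:
    name:
      - ovirt-engine-sdk-python
      - netaddr
```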
Debugging our configuration playbook
Once that task was done, we reprovisioned the hosts through Beaker with a fresh install of our distribution, configured them to export NFS storage, installed the required repos, and finally ran our deployment playbook. Once the oVirt hosted engine was deployed, we ran our configuration playbook to set up Red Hat Virtualization and the machines with networks, storage, hosts, and ISO files as needed.
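As a rough sketch of what that configuration playbook does, using the oVirt Ansible modules (the host names, credentials, and values below are placeholders, not our actual environment):

```yaml
# Rough sketch of the configuration playbook; names and addresses are
# placeholders. Log in to the engine, then declare a network and storage.
- hosts: localhost
  tasks:
    - name: Obtain an SSO token from the engine
      ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api
        username: admin@internal
        password: "{{ engine_password }}"

    - name: Create a logical network
      ovirt_network:
        auth: "{{ ovirt_auth }}"
        data_center: Default
        name: qe_net
        vlan_tag: 10

    - name: Attach an NFS data storage domain
      ovirt_storage_domain:
        auth: "{{ ovirt_auth }}"
        name: qe_data
        host: hypervisor.example.com
        data_center: Default
        nfs:
          address: storage.example.com
          path: /exports/data
```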
An issue I ran into during this phase was a task in one of the playbooks that used `whoami` to figure out which user it should use to `ssh` from the hypervisor to the hosted engine; it incorrectly picked my personal login instead of `root`. I raised that issue with the `ovirt-ansible-hosted-engine` contributors, and they fixed it.
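To illustrate the failure mode with a simplified sketch (this is not the actual upstream fix): a task that shells out to `whoami` reports whichever user Ansible connected as, so without privilege escalation it returns the personal login rather than `root`:

```yaml
# Simplified illustration of the pitfall: whoami reports the user Ansible
# connected as, which may be a personal login rather than root.
- name: Find out which user we are running as
  ansible.builtin.command: whoami
  register: current_user
  changed_when: false

# Escalating (or connecting as root explicitly) makes the answer
# deterministic.
- name: Find out which user we are running as, with escalation
  ansible.builtin.command: whoami
  become: true
  register: current_user
  changed_when: false
```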
Once that problem was solved, I encountered another. The playbook was getting past the previous failure step and deploying the hosted engine correctly, but when run from Jenkins it kept failing with a cryptic error message that said very little about what was actually wrong. With some digging, I found that SSH authentication between the hypervisor host and the hosted engine was broken. I added the `he_root_ssh_pubkey` attribute to my deployment and made sure the hypervisor had the correct private key deployed through Beaker. That fixed the issue, and my playbook finally ran `engine-setup` to completion and finished the deployment.
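For reference, here is a minimal sketch of how that variable might be wired into a deployment playbook; the role name and key path are assumptions, not my exact files:

```yaml
# Minimal sketch: hand the root public key to the hosted engine deployment
# so the hypervisor can authenticate to the engine VM over SSH. The role
# name and key path are assumptions for illustration.
- hosts: hypervisor
  vars:
    he_root_ssh_pubkey: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
  roles:
    - ovirt.hosted_engine_setup
```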
Adding further enhancements
We also wanted to set predictable passwords for our virtualization tools' databases, `ovirt_engine` and the data warehouse's `ovirt_engine_history`, but the `ovirt-hosted-engine-setup` role did not allow that. I raised a pull request to fix that, and I am hoping to get it merged soon.
Two more things I added were the ability to copy custom SSH keys to `known_hosts` on the hypervisors, and skipping deployment if we were already on the correct build. For the SSH part, I used the `known_hosts` module. This community module was slightly confusing, so I wrote my first pull request against the Ansible repo, extending the docs and examples for the `known_hosts` module; see the sketch below. I hope this addition also gets merged soon.
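Here is a minimal example of the module in the spirit of those docs; the host name and file path are placeholders, not our actual inventory:

```yaml
# Minimal sketch of the known_hosts module: scan the hosted engine's host
# key and pin it in the hypervisor's known_hosts file. The host name and
# path are placeholders.
- name: Add the hosted engine's host key to known_hosts on the hypervisor
  ansible.builtin.known_hosts:
    path: /root/.ssh/known_hosts
    name: engine.example.com
    key: "{{ lookup('pipe', 'ssh-keyscan -t rsa engine.example.com') }}"
    state: present
```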
To avoid re-deployments when we were already on the requested build, I decided to use the `repodiff` command, passing it the current build URL (fetched from the installed packages/repos) and the new build URL requested by the pipeline's user. `repodiff` compares the packages in the two repos; if no packages were added, removed, or modified, then the installed build and the build we were trying to install were the same. In that case, our bash script and playbooks skipped the installation and jumped straight to the last phase of the pipeline: running smoke tests for our products.
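A rough sketch of that comparison, wrapped in Ansible tasks and assuming the classic yum-utils `repodiff` syntax; the variable names are placeholders, and the string check depends on `repodiff`'s output format, so treat it as illustrative:

```yaml
# Rough sketch of the build comparison. current_repo_url and new_repo_url
# are placeholder variables; the string check below is a heuristic tied
# to repodiff's output format.
- name: Diff the installed build's repo against the requested build's repo
  ansible.builtin.command: >
    repodiff --old={{ current_repo_url }} --new={{ new_repo_url }}
  register: repodiff_out
  changed_when: false

# If repodiff reports no added/removed/modified packages, the two builds
# are the same and the deployment stages can be skipped.
- name: Flag the deployment for skipping when the builds match
  ansible.builtin.set_fact:
    skip_deploy: "{{ 'package' not in repodiff_out.stdout | lower }}"
```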
Wrapping up
In the end, this process was painful and challenging, but it was also rewarding to see it all come together. Take a look at one such pipeline run; note that a Stage 3 failure is expected in some situations and is ignored.

Thanks for reading! This article is just an overview rather than a complete walkthrough, so if you are interested in more detail about how I achieved this, how we plan to use it, or any of the code, please feel free to contact me.