At AnsibleFest 2021, we presented a talk about container task automation on Linux, and we learned more than we expected about both automation and presenting. To share that knowledge, we've summarized our top 10 takeaways from the project, covering containers, new Ansible tools, preparing the presentation itself, and some technical lessons learned while building the demo.
Now, on to the lessons.
Part 1: The non-hacking lessons
The first part of this article shares the non-technical things that we learned.
1. Yes, you can deliver content people are interested in
We recommend that you consider presenting at conferences. Look for the calls for papers released by open source conferences throughout the year, and submit your talks to share your knowledge.
At any given conference, the audience's experience is quite diverse, so whatever your talk's depth and focus, it will fit a good share of the attendees. Even at an introductory session like ours, we got very positive feedback from participants who said it was helpful and that they learned a lot.
Take a deep breath and submit your talks (even if you're still creating them) to conferences. You can deliver content people are interested in, and your audience (and you, too!) will learn and profit from it.
2. Define the project's basics
When preparing a presentation about automation, start simply by defining the project's basics—use cases, technology definitions, efficiency, software, and more.
- Use cases are essential in automation projects. Define them well, and clarify the limits of what you want to implement as well as what you do not intend to address. Our teaser video about this talk is an example: We said we would automate some container deployment procedures but did not intend to build a new Ansible-based container-orchestration platform.
- Define the technology you plan to deal with. In our case, that's containers, and specifically Podman on Red Hat Enterprise Linux (RHEL). RHEL is available to everyone through a developer subscription and includes Podman, which is Open Containers Initiative (OCI)-compliant. That gives you enterprise Linux plus OCI-compliant containers. A good start.
- Reuse, reduce, recycle; efficiency is key. Look up what Ansible content you have readily available. In our case, we used the containers.podman Ansible collection (and its tools) to automate Podman (see the requirements sketch after this list).
- Start with current software versions to ensure you are up to date. In our case, that's RHEL 8.4 and Podman 3.2. Ansible Automation Platform (AAP) 2.0 was available with Ansible Core 2.11, so we went with that.
- Be future-oriented and use modern features. We looked into AAP's new Execution Environments and decided to use them.
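Reusing existing content can be as simple as declaring the collection in a requirements file. Here's a minimal sketch; containers.podman is the collection we used, and the file location is just a common convention:

# collections/requirements.yml
---
collections:
  - name: containers.podman    # podman_container, podman_pod, and friends

You can install it locally with ansible-galaxy collection install -r collections/requirements.yml, and reuse the same file later when building the Execution Environment.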
[Learn what's new in Red Hat Ansible Automation Platform 2.]
3. Learn the technology you'll automate
Podman has grown quite a bit, so we needed a good understanding of the technology to present it effectively. Some ways to learn about Podman include:
- Check the getting started guide and some introductory tutorials.
- Have a look at containerized Podman.
- Know about the Podman remote API.
- Understand that Red Hat delivers more than Podman alone in Container Tools.
- Consider security with the udica SELinux policies for Podman.
- Last but very important: Include (or at least talk to) someone with experience in the field. In this case, that was co-presenter Bob Baumgartner, who provided the container know-how.
4. Create a development workflow
You can use our development workflow as a model for your own Ansible project:
- Versioning: Yes, you'll need Git. So, create a GitLab or GitHub account if you don't already have one.
- Roles: Break your use cases into Ansible Roles for portability.
- Collections: Build your own Ansible collection, and add dependencies and your new roles.
- Publish your collection to Ansible Galaxy, the Red Hat Automation Hub, or to your company's Private Automation Hub.
- Build your Execution Environments (EEs) with your new collection and its dependencies (see the sketch after this list).
- Create a container repo on your container registry to upload your EE image. (We used a publicly available repo on Quay.io and uploaded our locally built EE image instead of trying to get Quay.io to build it from the Containerfile. More on that below.)
- You might want to script or automate these steps because you'll rebuild now and then.
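For the EE step, here's a minimal execution-environment.yml sketch for ansible-builder. The base image is an assumption; use whichever AAP 2.x base EE image your subscription entitles you to on registry.redhat.io:

# execution-environment.yml (sketch)
---
version: 1
build_arg_defaults:
  # assumption: substitute the AAP base EE image you actually have access to
  EE_BASE_IMAGE: 'registry.redhat.io/ansible-automation-platform-20-early-access/ee-minimal-rhel8:latest'
dependencies:
  galaxy: requirements.yml    # your new collection plus its dependencies
  python: requirements.txt    # optional extra Python packages
  system: bindep.txt          # optional extra system packages

A command such as ansible-builder build -t quay.io/<your-namespace>/<your-ee> then produces the image you push to your container registry.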
5. Define your use cases clearly
Automation projects rise and fall with their use cases, so it's important to define them clearly. Ask yourself: "What is the bare minimum target I want to automate?" and take it from there, extending the use case step by step. Keep an eye on the capabilities of the Ansible collection you use.
[New to Ansible? Read the quick-start guide to Ansible for Linux sysadmins.]
Bob built a fairly simple setup of two containers for our project: The database backend (PostgreSQL) and the application layer (Node.js frontend).
We defined:
- Usecase1: Rootless standalone containers
- Usecase2: Rootless containers sharing a pod
- Usecase3: Root containers communicating by way of a Podman network configuration (CNI)
- Usecase4: Root containers replaying an OpenShift Container Platform (OCP)-extracted Kubefile
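To make Usecase1 concrete, here's a minimal sketch of what such a play can look like with the containers.podman collection. This is not our actual role code; the inventory group, image, and variables are examples:

---
# Usecase1 sketch: rootless standalone containers
- name: UseCase1 create rootless standalone containers
  hosts: podman_hosts                                          # hypothetical inventory group
  tasks:
    - name: Run the PostgreSQL backend as a rootless container
      containers.podman.podman_container:
        name: demo-db
        image: registry.redhat.io/rhel8/postgresql-13:latest   # example image
        state: started
        env:
          POSTGRESQL_USER: demo
          POSTGRESQL_PASSWORD: "{{ db_password }}"             # e.g., from an Ansible Vault
          POSTGRESQL_DATABASE: demo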
6. Start simple, and adopt more complex features gradually
It's best to begin slowly and evolve your project. Here's how we did it for our presentation:
- Start with an ansible-playbook implementation.
- Move on to ansible-navigator with EEs pulled from the container registry (a sample settings file follows this list).
- Move on to automation controller, import the EE, and run it there.
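For the ansible-navigator step, a small settings file saves you from repeating the --eei option on every run. A minimal sketch, with the caveat that exact key names vary slightly between ansible-navigator versions:

# ansible-navigator.yml (sketch)
---
ansible-navigator:
  mode: stdout
  execution-environment:
    enabled: true
    image: quay.io/kvegh/podautee:latest    # the EE image we pushed to Quay.io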
Part 2: The hacking lessons
And now, we'll share the technical things we discovered.
7. About building the EE image on Quay
We originally planned to deliver a Containerfile definition of the EE to Quay.io and have Quay build our image. However, building the EE requires pulling the AAP 2.x base EE images, and we couldn't quickly get Quay to authenticate against registry.redhat.io, so we opted to build the EE image locally with ansible-builder and make our container repo on Quay.io public for pulls. It was simpler that way.
Also, when we moved on to Usecase2 and Usecase3 with pods, it took us a while to realize that the network configuration (port mapping) needs to be defined at the pod level, not the container level.
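In containers.podman terms, that means publishing the ports on the pod and only joining the containers to it. A minimal sketch of two tasks, as they might sit in a role (names, image, and port are examples):

# The pod owns the port mapping; member containers just join the pod
- name: Create the pod with the published ports
  containers.podman.podman_pod:
    name: demo-pod
    state: started
    publish:
      - "8080:8080"

- name: Run the Node.js frontend inside the pod (no ports defined here)
  containers.podman.podman_container:
    name: demo-frontend
    image: quay.io/example/nodejs-frontend:latest    # hypothetical image
    pod: demo-pod
    state: started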
8a. ansible-navigator is very helpful at debugging
If you need to understand what ansible-navigator is doing within your container and how it runs ansible-playbook there, set the log level to debug (--ll debug). That lets you see how ansible-navigator runs Podman and how Podman, in turn, runs ansible-playbook.
Here's the session output, which you can copy and paste:
(navigator_venv) [ansible@AAP testenv]$ ansible-navigator --ll debug run usecase3_rootcontainers_networked.yml --vault-password-file /home/ansible/gitignored_secret --eei quay.io/kvegh/podautee -e @/home/ansible/vault-auth.yml -i inventory/hosts -m stdout -u ansible
PLAY [UseCase3 create rootcontainers with network] *****************************
TASK [Gathering Facts] *********************************************************
ok: [rhel-pod-aut42.kveghdemo.at]
ok: [rhel-pod-aut41.kveghdemo.at]
TASK [kvegh.podman_autodemo.clean_env : clean all nonroot containers] **********
changed: [rhel-pod-aut42.kveghdemo.at]
changed: [rhel-pod-aut41.kveghdemo.at]
TASK [kvegh.podman_autodemo.clean_env : clean all root containers] *************
changed: [rhel-pod-aut41.kveghdemo.at]
changed: [rhel-pod-aut42.kveghdemo.at]
TASK [kvegh.podman_autodemo.clean_env : clean all nonroot containers] **********
^CTerminated
(navigator_venv) [ansible@AAP testenv]$ grep "container engine invocation" ansible-navigator.log | tail -1
211020165954.123 DEBUG 'ansible-runner.wrap_args_for_containerization' container engine invocation: podman run --rm --tty --interactive -v /home/ansible/projects/podman_autodemo/testenv/:/home/ansible/projects/podman_autodemo/testenv/ --workdir /home/ansible/projects/podman_autodemo/testenv -v /home/ansible/:/home/ansible/ -v /home/ansible/projects/podman_autodemo/testenv/inventory/:/home/ansible/projects/podman_autodemo/testenv/inventory/ -v /tmp/ssh-JHIy2UTAtTkH/:/tmp/ssh-JHIy2UTAtTkH/ -e SSH_AUTH_SOCK=/tmp/ssh-JHIy2UTAtTkH/agent.2151411 -v /home/ansible/.ssh/:/home/runner/.ssh/ --group-add=root --ipc=host -v /tmp/ansible-navigator_tqcjp_gu/artifacts/:/runner/artifacts/:Z -v /tmp/ansible-navigator_tqcjp_gu/:/runner/:Z --env-file /tmp/ansible-navigator_tqcjp_gu/artifacts/b768e2a9-c7db-4a18-9718-4b788262d27a/env.list --quiet --name ansible_runner_b768e2a9-c7db-4a18-9718-4b788262d27a quay.io/kvegh/podautee:latest ansible-playbook /home/ansible/projects/podman_autodemo/testenv/usecase3_rootcontainers_networked.yml --vault-password-file /home/ansible/gitignored_secret -e @/home/ansible/vault-auth.yml -u ansible -i /home/ansible/projects/podman_autodemo/testenv/inventory/hosts
(navigator_venv) [ansible@AAP testenv]$
(navigator_venv) [ansible@AAP testenv]$
(navigator_venv) [ansible@AAP testenv]$ podman run --rm --tty --interactive -v /home/ansible/projects/podman_autodemo/testenv/:/home/ansible/projects/podman_autodemo/testenv/ --workdir /home/ansible/projects/podman_autodemo/testenv -v /home/ansible/:/home/ansible/ -v /home/ansible/projects/podman_autodemo/testenv/inventory/:/home/ansible/projects/podman_autodemo/testenv/inventory/ -v /tmp/ssh-JHIy2UTAtTkH/:/tmp/ssh-JHIy2UTAtTkH/ -e SSH_AUTH_SOCK=/tmp/ssh-JHIy2UTAtTkH/agent.2151411 -v /home/ansible/.ssh/:/home/runner/.ssh/ --group-add=root --ipc=host -v /tmp/ansible-navigator_tqcjp_gu/artifacts/:/runner/artifacts/:Z -v /tmp/ansible-navigator_tqcjp_gu/:/runner/:Z --env-file /tmp/ansible-navigator_tqcjp_gu/artifacts/b768e2a9-c7db-4a18-9718-4b788262d27a/env.list --quiet --name ansible_runner_b768e2a9-c7db-4a18-9718-4b788262d27a quay.io/kvegh/podautee:latest ansible-playbook /home/ansible/projects/podman_autodemo/testenv/usecase3_rootcontainers_networked.yml --vault-password-file /home/ansible/gitignored_secret -e @/home/ansible/vault-auth.yml -u ansible -i /home/ansible/projects/podman_autodemo/testenv/inventory/hosts
PLAY [UseCase3 create rootcontainers with network] *******************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************
ok: [rhel-pod-aut41.kveghdemo.at]
ok: [rhel-pod-aut42.kveghdemo.at]
This can be quite handy if Ansible cannot run for some reason, and you need information about why.
[Don't miss the free eBook The automated enterprise.]
8b. ansible-navigator is also very helpful at executing
We used some of those debugging features when ansible-navigator executed Podman and Podman executed ansible-playbook. Still, Ansible couldn't log into the remote endpoints it's supposed to manage, because it couldn't authenticate itself from within the EE container.
This is because we used SSH keys for Ansible to log in, and those keys are not available within the EE (the EE image gets shared, so you don't bake private keys into it). So, how can you use those keys? It turns out that ansible-navigator is pretty cool at that too: If you start an SSH agent and add the private key, ansible-navigator maps the agent into the runtime environment for Ansible to use:
(navigator_venv) [ansible@AAP testenv]$ eval `ssh-agent -s`
Agent pid 2155429
(navigator_venv) [ansible@AAP testenv]$ ssh-add /home/ansible/.ssh/id_rsa
Identity added: /home/ansible/.ssh/id_rsa
(navigator_venv) [ansible@AAP testenv]$
Also, ansible-navigator bind-mounts the local directory into the runtime environment if you need to add some files at runtime.
9. Open source really is your friend
Halfway through the demo project, we ran into an issue where ansible-navigator failed to display output when using a custom EE (which we did). In a strict corporate environment, we would have needed a support request to get this analyzed and fixed. It turned out to be a known issue, with a fix already available upstream that just hadn't been packaged and productized yet.
The workaround? Create a new Python virtual environment (venv) and pip install ansible-navigator into it. Boom: the fix came in straight from upstream, the output displayed correctly, and we could proceed until the fix finds its way into an AAP 2 release.
10. Use custom credentials to access encrypted vault-like data for playbook use
We used AAP 2.x and the automation controller to demonstrate running Usecase4. Our use case included accessing container image registries that require authentication with sensitive data (such as our credentials). On the command line, we solved this with an Ansible Vault and a --vault-password-file.
You need custom credentials to use this in automation controller, as Jan-Piet Mens explains straightforwardly.
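As a rough sketch of what such a custom credential type can look like in automation controller: the input configuration defines a secret field, and the injector configuration writes it to a temporary file and points Ansible at that file through an environment variable. The field name is our own choice; adapt it to your setup:

# Custom credential type, input configuration (sketch)
fields:
  - id: vault_password            # our name for the field; pick your own
    type: string
    label: Ansible Vault password
    secret: true
required:
  - vault_password

# Custom credential type, injector configuration (sketch)
file:
  template: "{{ vault_password }}"
env:
  ANSIBLE_VAULT_PASSWORD_FILE: "{{ tower.filename }}"

Attach a credential of this type to the job template, and the job can decrypt the vault without passing --vault-password-file by hand.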
Learn more
You can watch our presentation after logging into the AnsibleFest website.
We hope these lessons are useful to you. Please contact us (Karoly Vegh and Robert Baumgartner) on LinkedIn if you have thoughts or feedback. We want to thank Chris Jung, Phil Griffith, Eric Lavarde, and Elle Lathram for their contributions to our presentation.
AnsibleFest is a free, Red Hat-sponsored virtual technology conference that brings the entire global automation community together. Visit the AnsibleFest website for on-demand access to the demos, keynotes, labs, and technical sessions that you may have missed.
About the authors
Charlie is your friendly neighborhood Enterprise Linux and Ansible Automation Solution Architect at Red Hat Austria. He focuses on multicloud orchestration and compliance automation and dives into development topics on Linux, too.
Robert is Red Hat's de facto go-to solution architect in Austria for containerization, enterprise Kubernetes, and middleware. He has been with Red Hat Austria since the beginning, supporting customers all over the country.