
Reproducibility and consistency are two key traits driving the popularity of containers. Consistency is also a core principle of Red Hat Ansible Automation Platform: with only a few lines of YAML-formatted manifests, thousands of instances can be set up and configured uniformly. While managing target instances is simple, the control node, where Ansible execution is initiated, can be the most challenging aspect. As Ansible is written in Python, does the machine have the correct version of Python installed? Are the necessary Python modules installed? Are there any operating system dependencies needed? The list goes on and on.

These concerns, along with many others, led to a desire to leverage the benefits of containers to perform the control node’s role and eventually ushered in the concept of automation execution environments in Ansible Automation Platform 2. Running Ansible within containers is not a new concept and has been used quite successfully for some time now. However, there was no consistent process for building the container or executing Ansible from within it; seemingly everyone had their own version of running Ansible in a container. Ansible Automation Platform 2 includes a command line tool called ansible-builder, which simplifies the creation of automation execution environments by allowing users to define their composition, such as Ansible Content Collections or packages along with their dependencies, in a standardized fashion.

While this may sound great, many of the same issues that execution environments attempt to solve in the Ansible control node are present when building an execution environment itself. Instead of producing execution environments on individual end user machines, wouldn’t it be great if this capability for building execution environments was available for anyone to consume in a uniform fashion? 

The rise of containers and the desire to manage them at scale led to the popularity of Kubernetes. While Kubernetes itself does not include a method for building containers within the platform, OpenShift Container Platform has included this capability from the beginning and offers several different methods for building container images, such as:

  1. Using a Dockerfile.
  2. Source-to-Image.
  3. Extending the content from an existing image.
  4. Custom.

Even though a custom option for building container content was available, it was a complex process and required deep knowledge of the underlying build system in order to properly make use of the feature. As a result, this option has seen minimal adoption.

Today, the options for building containers continue to grow by the day and are no longer limited to Docker-flavored Dockerfile builds or contract-based container build tools, such as Source-to-Image. Not only has the number of tools for producing containers increased, but several programming languages and frameworks can now produce container content as well. To support producing containers in a Kubernetes environment, the Shipwright project was created. It enables the use of many of the most popular container build tools available and provides a method for easily incorporating new build strategies. (Note: Shipwright is the upstream project of OpenShift Builds v2.) Given that ansible-builder is just another tool for producing containers, incorporating it within the Shipwright ecosystem streamlines how execution environments are built.

Shipwright leans upon many of the components in the Tekton ecosystem (Red Hat OpenShift Pipelines) for orchestrating container builds. The following are the high-level resources found within Shipwright:

  1. Build: defines the source of the build, the destination for the resulting image, and the build strategy to use.
  2. BuildRun: represents a single invocation of a Build.
  3. BuildStrategy/ClusterBuildStrategy: namespace-scoped and cluster-scoped definitions of the steps used to assemble an image.

Several ClusterBuildStrategy resources are provided by Shipwright, including the aforementioned Source-to-Image, along with support for other image build tools, such as Kaniko and Cloud Native Buildpacks. Adding support for additional build tools, like ansible-builder, involves creating a new ClusterBuildStrategy that includes all of the necessary steps to produce and distribute the container image.
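In schematic form, a ClusterBuildStrategy is simply a named list of build steps, each of which is an ordinary container definition. The following is a sketch using the Shipwright v1alpha1 API; the step name and image are placeholders, not the actual contents of the ansible-builder strategy:

```yaml
apiVersion: shipwright.io/v1alpha1
kind: ClusterBuildStrategy
metadata:
  name: ansible-builder
spec:
  buildSteps:
    # Each build step is a standard Kubernetes container specification
    - name: build-and-push
      image: <builder-image>   # placeholder: an image containing ansible-builder and Buildah
      command:
        - /bin/bash
      args:
        - -c
        - |
          echo "build logic goes here"
```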

Project repository

The associated assets related to this write-up can be found in the following GitHub repository: 

Clone the contents of this repository to your machine:

git clone
cd ansible-builder-shipwright

The contents of the repository include assets that can be used to not only install Shipwright, but demonstrate how it can be used to build an execution environment.

Deployment options

The repository includes the steps necessary to install and configure Shipwright to support producing execution environments, as well as a sample execution environment produced using this approach. The tasks can be performed either as manual invocations of command line utilities or automated using Ansible.

To start, let’s illustrate how to accomplish these tasks manually and then introduce how the power of Ansible modules can be instead utilized to perform the same actions.

Installing Shipwright

The first step towards being able to leverage Shipwright to act as the facilitator for building execution environments is to install it into a Kubernetes environment. 

Shipwright is available as an operator through OperatorHub for installation in OpenShift. Execute the following command to deploy the operator to the cluster:

kubectl apply -f resources/operator/olm

Confirm the successful installation of the operator by checking the state of the ShipwrightBuild CustomResourceDefinition.

kubectl wait --for condition=established crd/

Next, create a new namespace called shipwright-build and add the ShipwrightBuild custom resource, which will deploy the Shipwright build controller.

kubectl apply -f resources/operator/instance
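The custom resource contained in that directory resembles the following sketch; the resource name is illustrative, while the targetNamespace field tells the operator where to deploy the build controller:

```yaml
apiVersion: operator.shipwright.io/v1alpha1
kind: ShipwrightBuild
metadata:
  name: shipwright-operator
spec:
  # Namespace into which the Shipwright build controller is deployed
  targetNamespace: shipwright-build
```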

Confirm the controller is running in the shipwright-build namespace:

kubectl get pods -n shipwright-build

NAME                                           READY   STATUS    RESTARTS   AGE
shipwright-build-controller-54f5f975c6-v8st8   1/1     Running   0          25s

Alternatively, instead of using the operator-based approach, the raw manifests can be installed from the upstream project repository.

Integrating ansible-builder and Shipwright

Now that Shipwright has been deployed to the cluster, let's demonstrate how ansible-builder can be integrated into the Shipwright ecosystem. 

Open the resources/clusterbuildstrategy/ansible-builder-clusterbuildstrategy.yml file containing the ClusterBuildStrategy and notice the two primary functions of this resource contained within the buildSteps property:

  1. Producing a container image.
  2. Pushing the resulting image to an image registry.

Each of these build steps makes use of a container image that includes the necessary tools and configurations to execute the actions defined. To produce an execution environment that is compatible with running on OpenShift, Buildah is included within the image, as it facilitates not only producing the image, but also publishing it to the desired image registry.

While browsing through the ansible-builder-clusterbuildstrategy.yml file, you may also notice a number of actions being performed within each build step. First, ansible-builder is used to create a build context from an execution-environment.yml file. Then, a container image is created using the Buildah utility. You may notice several parameterized fields within each build step, such as $(build.source.contextDir) and $(build.output.image), as shown below.

    - args:
        - -c
        - |
          set -e
          # Change into directory containing EE
          cd /workspace/source/$(build.source.contextDir)

          # Create Ansible Build Context
          /usr/bin/ansible-builder create -c $HOME/context

          # Build the EE
          /usr/bin/buildah --storage-driver=vfs build -t $(build.output.image) $HOME/context

These are properties that are defined in the Build resource which are substituted at runtime. We will show how these values can be defined momentarily. 

Add the ansible-builder ClusterBuildStrategy by executing the following command:

kubectl create -f resources/clusterbuildstrategy/ansible-builder-clusterbuildstrategy.yml

With the logic to produce an execution environment available within the cluster, let's demonstrate how it can be used. Within the repository, the example-builder directory contains assets to produce a sample execution environment. The ansible-builder tool requires that a YAML-based file called execution-environment.yml be present; it defines all of the configurations necessary to produce the image, including system packages in bindep format, Python packages, as well as any Ansible Content Collections. A full overview of the available configuration options can be found within the ansible-builder documentation.
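A minimal execution-environment.yml might look like the following sketch; the dependency file names are conventional examples, not necessarily those used in the example repository:

```yaml
---
version: 1
dependencies:
  # Ansible Content Collections to install (ansible-galaxy requirements format)
  galaxy: requirements.yml
  # Python packages (pip requirements format)
  python: requirements.txt
  # Operating system packages (bindep format)
  system: bindep.txt
```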

To provide a location within the cluster to execute the build, create a new namespace called ansible-builder-shipwright:

kubectl create namespace ansible-builder-shipwright
kubectl config set-context --current --namespace=ansible-builder-shipwright

Now, create a new build resource that defines the source code retrieved as part of the build process, the location where the image should be stored once the build process is complete, as well as the name of the Shipwright build strategy. This can be found in the example/ansible-builder-build.yml file within the repository and is shown below (the source repository URL is omitted here):

apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
  name: ansible-builder-example
spec:
  source:
    contextDir: example-builder
    revision: main
  output:
    image: image-registry.openshift-image-registry.svc:5000/ansible-builder-shipwright/ansible-builder-shipwright-example-ee:latest
  strategy:
    name: ansible-builder
    kind: ClusterBuildStrategy

To simplify the setup and remove dependencies on external components, such as image registries, OpenShift will be used in this example as it contains an integrated image registry within the platform. You are free to modify the destination image registry to the location of your choosing; additional steps will be required to configure the necessary credentials in order to authenticate with the registry.

Add the build to the cluster by executing the following command:

kubectl create -f example/ansible-builder-build.yml

Confirm the build resource was registered successfully:

kubectl get build ansible-builder-example

NAME                      REGISTERED   REASON      BUILDSTRATEGYKIND      BUILDSTRATEGYNAME   CREATIONTIME
ansible-builder-example   True         Succeeded   ClusterBuildStrategy   ansible-builder     67s

The actual execution of a build is facilitated by a BuildRun resource which connects a build and any other runtime references, such as the name of the service account that should be used. Since the images that are used to produce the execution environments as part of ansible-builder are located in the Red Hat Ecosystem Catalog, credentials must be provided to access the content. Similar to accessing any container image in Kubernetes hosted in a non-public repository, credentials are stored in a secret. 

Create a new secret called ansible-ee-images to store the credentials to access the Ecosystem Catalog.

kubectl create secret docker-registry ansible-ee-images --docker-server=registry.redhat.io --docker-username=<username> --docker-password=<password>

OpenShift Pipelines injects a service account called pipeline to perform the steps necessary to execute the build. To promote the principle of least privilege, a separate service account dedicated to producing execution environments will be used instead.

Execute the following command to create a service account called ansible-builder-shipwright:

kubectl apply -f resources/policies/serviceaccount.yml

Associate the ansible-ee-images secret containing the registry credentials to the ansible-builder-shipwright service account by patching the service account.

kubectl patch serviceaccount ansible-builder-shipwright  -p '{"secrets": [{"name": "ansible-ee-images"}]}'

As of this publication, certain components within the Shipwright framework require containers to be executed using the user ID defined within the container image. As a result, the ansible-builder-shipwright service account must be granted access to the anyuid SecurityContextConstraint (SCC).

Execute the following command to grant the service account access to the anyuid SCC by creating a RoleBinding called system:openshift:scc:anyuid:

kubectl apply -f resources/policies/anyuid-scc-rolebinding.yml
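The manifest applied above resembles the following sketch, which binds the service account to the system:openshift:scc:anyuid ClusterRole that OpenShift provides for granting SCC access:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system:openshift:scc:anyuid
  namespace: ansible-builder-shipwright
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  # ClusterRole associated with the anyuid SCC
  name: system:openshift:scc:anyuid
subjects:
  - kind: ServiceAccount
    name: ansible-builder-shipwright
    namespace: ansible-builder-shipwright
```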

Finally, once Shipwright completes assembling the image, it must be pushed to a container registry for storage. For simplicity, OpenShift’s internal registry will be used in this situation. However, any container registry can act as a destination for a build. The service account being used to run the build must also be granted access to push the image to the internal registry.

Execute the following command to grant this access:

kubectl apply -f resources/policies/image-builder-rolebinding.yml
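A RoleBinding granting push access typically references OpenShift's built-in system:image-builder ClusterRole, along the lines of the following sketch (the RoleBinding name here is hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ansible-builder-image-builder
  namespace: ansible-builder-shipwright
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  # Built-in OpenShift role permitting image pushes to the integrated registry
  name: system:image-builder
subjects:
  - kind: ServiceAccount
    name: ansible-builder-shipwright
    namespace: ansible-builder-shipwright
```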

When a BuildRun is created, the Shipwright build controller dynamically constructs a Tekton TaskRun based on a combination of the associated build strategy and Build.

Start the build process by creating a BuildRun by executing the following command:

kubectl create -f example/ansible-builder-buildrun.yml
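The BuildRun being created resembles the following sketch using the Shipwright v1alpha1 API; the generateName value is illustrative:

```yaml
apiVersion: shipwright.io/v1alpha1
kind: BuildRun
metadata:
  generateName: ansible-builder-example-
spec:
  # Reference to the previously created Build resource
  buildRef:
    name: ansible-builder-example
  # Dedicated service account with registry and SCC access
  serviceAccount:
    name: ansible-builder-shipwright
```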

Confirm the Tekton TaskRun has been created by the build operator:

kubectl get taskrun

NAME                                  SUCCEEDED   REASON    STARTTIME   COMPLETIONTIME
ansible-builder-example-gkf9l-xlx6l   Unknown     Running   1m41s 

If you have the Tekton (tkn) CLI available on your machine, you can track the progress of the build by viewing the logs:

tkn taskrun logs -f -L

Once the build has successfully completed, you can confirm that the image containing the execution environment was pushed to the integrated image registry by viewing the ImageStreams within the namespace:

kubectl get imagestream

NAME                                    IMAGE REPOSITORY                                                                                                    TAGS     UPDATED
ansible-builder-shipwright-example-ee   image-registry.openshift-image-registry.svc:5000/ansible-builder-shipwright/ansible-builder-shipwright-example-ee   latest   2 minutes ago

Ansible based provisioning

Alternatively, Ansible can be used to automate the same actions performed in the prior sections. The use of Ansible not only reduces the potential for errors or missed steps, but streamlines the entire provisioning process.

When using an Ansible-based approach, the assets can be invoked using the standalone ansible-playbook command or with ansible-navigator. When using the standalone approach, first ensure that your machine has the required dependencies by running the following command:

ansible-galaxy collection install -r ansible/requirements.yml

With the necessary dependencies installed, execute the playbook. Similar to the previous approach, the username and password for the Ecosystem Catalog must be specified. These can be provided using the container_registry_username and container_registry_password extra variables as shown below.

ansible-playbook ansible/playbooks/setup.yml -e container_registry_username="<username>" -e container_registry_password="<password>"

Once the setup playbook completes, the Shipwright operator will be installed within the cluster as well as the supporting components necessary to produce execution environments.
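Under the hood, a playbook like setup.yml typically drives these cluster actions through the kubernetes.core collection. A task creating the registry pull secret might resemble the following hypothetical sketch; the task name and the registry_auth variable are illustrative and may differ from the repository's actual contents:

```yaml
- name: Create pull secret for the Red Hat Ecosystem Catalog
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      type: kubernetes.io/dockerconfigjson
      metadata:
        name: ansible-ee-images
        namespace: ansible-builder-shipwright
      data:
        # registry_auth is a hypothetical variable holding a dockerconfig structure
        .dockerconfigjson: "{{ registry_auth | to_json | b64encode }}"
```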

The same automation can be achieved using ansible-navigator by executing the following command:

ansible-navigator run ansible/playbooks/setup.yml --mode=stdout --eev ~/.kube/config:/home/runner/.kube/config -e container_registry_username="<username>" -e container_registry_password="<password>"

With the environment configured using either automation approach, the final step is to trigger a build of the example execution environment. Use the following command to trigger the build process using the ansible-playbook based approach:

ansible-playbook ansible/playbooks/build-ee.yml

Or, to use ansible-navigator to accomplish the build task, execute the following command:

ansible-navigator run ansible/playbooks/build-ee.yml --mode=stdout --eev ~/.kube/config:/home/runner/.kube/config

Regardless of the approach used to deliver the automation, the resulting action is a newly produced execution environment available within the integrated image registry, which can be confirmed with the following command.

kubectl get -n ansible-builder-shipwright imagestream

Execution environments in action

Using either the imperative or automated approach, an execution environment has been published to an image registry. While the integrated registry was used in this case, any image registry, including private automation hub, can be used as the destination. The only modification necessary occurs in the Shipwright Build resource where the output location is specified.

To learn more about how private automation hub can be used to store execution environments, refer to this article.

With an execution environment available within an image registry, it can then be used to deliver automation either on a local machine using ansible-navigator or within automation controller.

Execution environments provide a way of operating Ansible automation by running Ansible within an OCI-compliant container runtime. By using the Shipwright project, you can quickly and easily produce these environments within a Kubernetes environment.


About the author

Andrew Block is a Distinguished Architect at Red Hat, specializing in cloud technologies, enterprise integration and automation.

