
Image mode for Red Hat Enterprise Linux (RHEL) uses the same tools, skills, and patterns as containerized applications to deliver an operating system that is easier to build, ship and run. This post will cover the concepts behind image mode and help introduce users to foundational concepts required to package operating systems in Open Container Initiative (OCI) container images.

The following steps provide a hands-on introduction to the concepts behind image mode for RHEL by walking through building and deploying a custom image. You will need:


  1. A subscribed RHEL 9.x system on which to run the commands (a laptop, virtual machine (VM), etc. will work), with a minimum of 10 GB of available disk space. Keep in mind that more disk space may be required depending on the size and quantity of images being created.
  2. A Red Hat account with either production or developer subscriptions (no-cost developer subscriptions are available through the Red Hat Developer program).

  3. A container registry. This example uses quay.io as the registry that content is published to, but you may choose to utilize another hosted registry service or run a registry locally. A quay.io account can be created quickly and easily.

Getting started

Begin by confirming that your system is subscribed to get RHEL content.

$ sudo subscription-manager register

Next, we’ll install Podman. We recommend using the latest version available, but anything v4.* or newer will work. Note that other container tools, such as Docker or container pipeline tooling, can work in a production environment. This example will teach the concepts using Podman, but just remember other tools may be more relevant to your environment.

$ sudo dnf -y install podman

Now, it’s time to authenticate to quay.io. Start by visiting the site and clicking “New service account”. From there, click the name of the new entry and copy/paste the “docker login” instructions into the terminal, replacing the docker command with podman. Full instructions are available in the quay.io documentation if more information is needed. Converting the container image to a disk image with image builder will require elevated privileges with podman, so go ahead and authenticate to the registry both with and without sudo.

$ podman login quay.io
#repeat with sudo
$ sudo podman login quay.io

Bootc container images differ technically from application containers in two important ways:

  • bootc images use OSTree inside the container.
  • A bootc image has a kernel and enough other packages to boot a physical or virtual machine.

Application container base images typically contain a minimal set of packages that are unrelated to hardware management. And, unlike the Red Hat Universal Base Image (UBI), RHEL bootc images are distributed under the same licensing terms as RHEL.

Let’s pull the rhel-bootc base image.

$ podman pull registry.redhat.io/rhel9/rhel-bootc:latest

Create a Containerfile

Now, let’s look at an example Containerfile. You may know these as Dockerfiles. We’re going to start simple and install a LAMP stack. Save the text below to a new file called Containerfile:


FROM registry.redhat.io/rhel9/rhel-bootc:latest

#install the lamp components
RUN dnf module enable -y php:8.2 nginx:1.22 && dnf install -y httpd mariadb mariadb-server php-fpm php-mysqlnd && dnf clean all

#start the services automatically on boot
RUN systemctl enable httpd mariadb php-fpm

#create an awe-inspiring home page!
RUN echo '<h1 style="text-align:center;">Welcome to image mode for RHEL</h1> <?php phpinfo(); ?>' >> /var/www/html/index.php

Now we have described a simple operating system that will run a web server on port 80, with a database and PHP available as well. Let’s build the container image:

Build an Image

$ podman build -f Containerfile -t quay.io/[my_account]/lamp-bootc:latest .


-t will tag the image. This example assumes that quay.io is the registry being used. Please adjust for whichever registry you are using.

-f will instruct Podman to use our Containerfile.

Test the Image

Now that we have our image, let’s test it quickly. Since our image is a container, it’s fast to run, and any typos will surface as errors. We’ll give it the short name (lamp) for simplicity:

$ podman run -d --rm --name lamp -p 8080:80 quay.io/[my_account]/lamp-bootc:latest /sbin/init

The container will start, and there is no need to worry about logging in right now. Open a browser and verify that you can view the webpage being served at http://[your_ip_address]:8080. If the page doesn’t load, double-check your firewall rules. If you’re using a local system, the loopback address should work fine. In this example, we’re starting systemd. However, for many testing scenarios, it will be more efficient to simply start an application. Fast turnaround on testing and validation is one of the most profound things about using containers to define operating system images.

You can shell into the running container instance with podman exec, using the name we set above.

$ podman exec -it lamp /bin/bash

Stop the instance using the same name:

$ podman stop lamp

Push to a registry

Next, authenticate by logging in to quay.io, push the image to the registry, and configure the repository to be publicly accessible.

$ podman login quay.io
$ podman push quay.io/[my_account]/lamp-bootc:latest

At this point, we have created a layered image that we can deploy, and there are several ways it can be installed to a host: we can use RHEL’s installer and kickstart a bare metal system (deployed via USB, PXE, etc.), or we can use image builder to convert the container image to a bootable disk image. Note that once this container is “installed”, future updates will apply directly from the container registry as they are published, so the installation process only happens once.

Deploying via KVM/QEMU with a Qcow2 disk image

This example will use image builder to convert the container image into a qcow2-formatted disk. Our example assumes the image is in a publicly accessible repository. Please refer to the image builder documentation for how to utilize an image from a private repository. Other image formats, aside from qcow2, are also available.

First, create a config.json file to configure the resulting disk image. For this example, config.json will include the user(s) you wish to create. Paste your own SSH key and password into the example below.

{
  "blueprint": {
    "customizations": {
      "user": [
        {
          "name": "cloud-user",
          "password": "changeme",
          "key": "ssh-rsa AAAAB3Nz..........",
          "groups": [
            "wheel"
          ]
        }
      ]
    }
  }
}
Next, pass the config.json along with our lamp container to image builder:

$ sudo podman run --rm -it --privileged \
-v .:/output \
-v $(pwd)/config.json:/config.json \
--pull newer \
registry.redhat.io/rhel9/bootc-image-builder:latest \
--type qcow2 \
--config /config.json \
quay.io/[my_account]/lamp-bootc:latest

Once the image is ready, we can run it using libvirt (or qemu directly):

virt-install \
 --name lamp-bootc \
 --memory 4096 \
 --vcpus 2 \
 --disk qcow2/disk.qcow2 \
 --import \
 --os-variant rhel9.4

With the VM running, you should be able to verify that the site is running by viewing http://[your_instance_ip_address] in a browser.

Deploying to AWS with an AMI disk image

For this example, we’ll need cloud-init, which isn’t included in the lamp Containerfile we created previously. This is where the container workflow helps us: we can easily create a layered image for our use case. We’ll demonstrate a layered build, but feel free to edit the original Containerfile to include cloud-init if that’s easier.

FROM quay.io/[my_account]/lamp-bootc:latest

#install cloud-init for AWS
RUN dnf install -y cloud-init && dnf clean all

Build and push the image:

$ podman build -f Containerfile -t quay.io/[my_account]/lamp-bootc-aws:latest .
$ podman push quay.io/[my_account]/lamp-bootc-aws:latest

We are going to rely on cloud-init to inject users and SSH keys, which allows us to skip the config.json step from the KVM example above (creating a cloud-init config is outside the scope of this document). By using cloud-init, we improve the security posture by avoiding hardcoded credentials in our image. Next, run image builder to create our AMI:

$ sudo podman run --rm -it --privileged \
 --pull=newer \
 --security-opt label=type:unconfined_t \
 -v $XDG_RUNTIME_DIR/containers/auth.json:/run/containers/0/auth.json \
 -v $HOME/.aws:/root/.aws:ro \
 --env AWS_PROFILE=default \
 registry.redhat.io/rhel9/bootc-image-builder:latest \
 --type ami \
 --aws-ami-name lamp-bootc-aws \
 --aws-bucket bootc-bucket \
 --aws-region us-east-1 \
 quay.io/[my_account]/lamp-bootc-aws:latest
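Since cloud-init handles user injection at launch time, a minimal user-data document passed to the instance (for example, via EC2 user data) might look like the sketch below. The user name, group, and key are illustrative placeholders:

```yaml
#cloud-config
# Hypothetical user-data supplied at instance launch; substitute your own values
users:
  - name: cloud-user
    groups: wheel
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3Nza..... # paste your public key here
```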

Additional options are available to configure the properties for AWS. Please see the bootc-image-builder documentation for more details.

After the publishing process completes successfully, start your image and prepare to be amazed by viewing http://[your_instance_ip_address] in a browser.

Installing to Bare Metal via Kickstart

As you’ve seen, there are several ways to install our container. This section covers the use of kickstart, which is very popular for bare metal deployments using either ISO, PXE, or USB drives. Some familiarity with kickstart concepts is assumed, as this guide does not go into detail. Insert your own details for users, passwords, and SSH keys in the example below. Adding additional options is supported, but be aware that the %packages section cannot be used with this workflow, since the package content comes from the container image. Please download the RHEL 9.4 Boot ISO for your architecture from the Red Hat Customer Portal.

network --bootproto=dhcp --device=link --activate
# Basic partitioning
clearpart --all --initlabel --disklabel=gpt
reqpart --add-boot
part / --grow --fstype xfs
# Here's where we reference the container image to install - notice the kickstart
# has no `%packages` section!  What's being installed here is a container image.
ostreecontainer --url quay.io/[my_account]/lamp-bootc:latest
firewall --disabled
services --enabled=sshd
# optionally add a user
user --name=cloud-user --groups=wheel --plaintext --password=changeme
sshkey --username cloud-user "ssh-ed25519 AAAAC3Nza....."
# if desired, inject a SSH key for root
rootpw --iscrypted locked
sshkey --username root "ssh-ed25519 AAAAC3Nza....." #paste your ssh key here

Copy this config file to a web server, update the password and SSH key, boot any physical or virtual system using the installation media, and append the following to the kernel arguments (adjust the URL to wherever the kickstart file is hosted):

inst.ks=http://[server_address]:8000/kickstart.ks

Press ctrl-x to boot using this option.

If an HTTP server is not readily available, you can use the HTTP server module available in most Python installations. In the directory holding the kickstart file, run:

$ python3 -m http.server

Another approach, which does not utilize an HTTP server to host the kickstart file, injects the kickstart into the installer ISO. The lorax package includes a utility called mkksiso, which can embed this file in an ISO. This is useful for booting directly from a thumb drive and avoids editing the boot menu. Run:

$ mkksiso --ks /PATH/TO/KICKSTART /PATH/TO/ISO /PATH/TO/NEW-ISO

Pushing an update

A key aspect of this story is that the install or deploy is a one-time task. Much of the value of this model comes on “day 2”, when changes can be made by pushing images to the registry. Automatic updates are on by default! This is, of course, simple to configure for maintenance windows, or it can be disabled completely. To try it out, make a change to your Containerfile and repeat the build and push steps to make the new image available on your registry. The default timer for the systemd unit will kick in after an hour of uptime, but you can also run `bootc upgrade` earlier than that to grab the update.
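As a sketch of how the update schedule could be confined to a maintenance window, a systemd drop-in can override the timer that bootc ships (the drop-in file name and the Sunday 02:00 calendar value here are illustrative; verify the timer's unit name on your release):

```ini
# /etc/systemd/system/bootc-fetch-apply-updates.timer.d/maintenance-window.conf
# Clear the default triggers, then only fire during a weekly window
[Timer]
OnBootSec=
OnUnitInactiveSec=
OnCalendar=Sun *-*-* 02:00:00
Persistent=true
```

To disable automatic updates completely instead, you can stop and disable the timer with systemctl (e.g., `sudo systemctl disable --now bootc-fetch-apply-updates.timer`).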

Next Steps

Now that you’ve walked through a simple example with image mode for RHEL, we recommend exploring some of your own use cases and considering the possibilities and operational efficiencies that can be gained when using container tools to version and manage operating system deployments. Be sure to have a look at our bootc examples repo, which covers a number of platforms and useful scenarios. We also encourage you to check out the full documentation when you’re ready to go deeper.

About the author

Ben Breard is a Senior Principal Product Manager at Red Hat, focusing on Red Hat Enterprise Linux and Edge Offerings.

