I often hear the question: "How do I manage containers with RHEL?" I think there's a knowledge gap here because many users never took `docker run [image]` beyond a laptop and into production. That's a good thing for many environments, as using an orchestration platform like the industry-default Kubernetes provides almost endless capabilities and benefits.
The reality in many edge environments, though, is that being lightweight, secure, and reliable matters more than having the comprehensive capabilities Kubernetes provides. Operationally, it can be tricky to scale `docker run` across a massive number of systems. An obvious answer is maintaining various scripts to perform everyday actions; however, this is almost antithetical to where the industry is going. Fortunately, there are far more efficient ways to do things using only the basic features of Red Hat Enterprise Linux (RHEL).
How Linux can manage containers
If you can ensure you're always running the desired containers with the correct configurations across a fleet of edge systems with minimal overhead, you can solve many use cases. Fortunately, RHEL provides everything you need to accomplish this using a combination of two well-known components in the Linux space: Podman and systemd.
- Containers running on edge devices don't always require Kubernetes.
- Podman provides advanced container management, including:
    - Creating pods
    - Enabling automatic updates
    - Managing containerized applications as services with systemd
- Kubernetes and MicroShift are great tools, but Podman may be sufficient for certain use cases.
First, I'll define what "managing containers" means and establish a baseline, as it can be a bit of a loaded term. For the sake of simplicity, I'll describe some common actions that all containers need in production: start, auto-start, auto-restart, runtime health checks, and updating.
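As a quick taste of one of those actions, Podman ships health checks natively. The following is an illustrative sketch, not part of the deployment described later — the container name `web` and the exact check command are invented for the example:

```shell
# podman run -d --name web -p 8080:80 \
    --health-cmd 'curl -f http://localhost:80 || exit 1' \
    --health-interval 30s \
    --health-retries 3 \
    docker.io/nginx
# podman healthcheck run web
```

`podman healthcheck run` executes the check once and returns a nonzero exit code if it fails, so you can also use it from scripts or unit files to verify a container is actually serving traffic.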
About Podman, systemd, and containers
Podman is a powerful container engine that provides everything needed to run containers and pods (groups of containers). Its roots derive from the Docker project, and it strives for compatibility and a simpler internal architecture. Podman provides a familiar command-line interface (CLI), a very capable API, and the ability to create and interpret Kubernetes YAML files. Also, when using Podman with RHEL 9, the system will default to using cgroup2, which is really the only way to do secure delegation of Linux control groups. This is a very good security posture to have out of the box.
The other system component I'll discuss is systemd, which is responsible for managing the userspace on modern Linux systems. The project started as a replacement init system, and it gained functionality to become more of a manager for "the system." My favorite thing about systemd is that it provides a declarative abstraction for describing how services should run on Linux. If you thought, "Hey, that sounds a lot like what Kubernetes YAML gives us," you're right! It's pretty neat when you think about it that way.
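To make that declarative idea concrete, here's a minimal, hypothetical unit file — the `myapp` name and binary path are invented for illustration. You describe the desired state, and systemd handles startup ordering, supervision, and restarts:

```ini
# /etc/systemd/system/myapp.service (hypothetical example)
[Unit]
Description=My example application
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Nowhere do you script *how* to start, watch, or restart the process; you declare *what* should be true, much like a Kubernetes manifest.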
Here's the best part about getting containers to production on Linux hosts: All the integration and plumbing you need between Podman and systemd already exists, and it's mature. It's also the most lightweight approach that's practical for today's hardware.
The integration between container runtimes and systemd was one of the limiting factors administrators ran into in the early days of the Docker project. Now that Podman has delivered this, it's expanded the number of use cases where containers can run without an orchestration system.
Essentially, you can use Podman for the container instantiation and rely on systemd to be the arbiter to ensure everything is running correctly. Both components are doing what they do best and in the spirit of the Unix philosophy. What makes this even better is that Podman can generate systemd unit files for you, similar to how it can create Kubernetes YAML. If you've ever used static pods on Kubernetes, this will feel similar but with some advantages.
Single container example
Now that you know what these components are, here's an example of how this works with a single container.
First, get a simple container running on a system:
```
# podman run -d --name app1 --rm \
    --label "io.containers.autoupdate=registry" \
    --sdnotify=container \
    -p 8081:80 \
    docker.io/nginx
```
There are two options in this command that help "manage" app1:

- The `--label "io.containers.autoupdate=registry"` option watches the registry and automatically updates the container when a new version appears with the same tag.
- The `--sdnotify=container` option uses the native interface to systemd's notification socket, as the goal is to let systemd control start, stop, and restart actions. This also enhances the auto-update feature: if a new container fails to run, Podman automatically rolls back to the last working image with no user intervention. Amazing!
Next, have Podman do the heavy lifting to create the unit file:
```
# podman generate systemd --new -n -f --start-timeout 600 app1
```
Here's what each option does:

- The `--new` option creates a new container from the image on every start. This also makes the unit file completely portable between systems; it's the only file that needs to be in place on the system to run the container.
- The `-n` option uses the container name in the unit file's name, for example, `container-app1.service`.
- The `-f` option writes the unit to a file instead of using stdout.
- The `--start-timeout 600` option allows a longer start time (here, 10 minutes), ensuring the system has enough time to pull the image on the initial startup. You don't always need to change this from the default, but doing so can help avoid a separate step to pull the image.
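For reference, the generated `container-app1.service` looks roughly like the following. This is abbreviated and approximate — the exact contents depend on your Podman version — but note how `--start-timeout 600` becomes `TimeoutStartSec=600` and `--sdnotify=container` yields `Type=notify`:

```ini
# container-app1.service (abbreviated and approximate)
[Unit]
Description=Podman container-app1.service
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
TimeoutStartSec=600
Type=notify
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --rm -d --replace \
    --name app1 --label "io.containers.autoupdate=registry" \
    --sdnotify=container -p 8081:80 docker.io/nginx
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id

[Install]
WantedBy=default.target
```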
From here, you just need to copy `container-app1.service` to `/etc/systemd/system`. For an unprivileged user, use `$HOME/.config/systemd/user`. Once the unit file is in place, run `systemctl daemon-reload` to make systemd aware of it.
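For the rootless case, the same flow uses `systemctl --user`, and `loginctl enable-linger` keeps the user's services running even when no login session is active:

```shell
$ mkdir -p ~/.config/systemd/user
$ cp container-app1.service ~/.config/systemd/user/
$ systemctl --user daemon-reload
$ systemctl --user enable --now container-app1.service
$ loginctl enable-linger "$USER"
```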
To start the container and enable it to run at boot, run `systemctl enable --now container-app1.service`. The only other thing you need to do is enable the `podman-auto-update.timer`. Think of this as the "cron" schedule for when the system checks for updates to all containers carrying the `io.containers.autoupdate` label. Set the systemd timer to a time appropriate for your environment. If you want the system to check on Saturdays at 1 a.m., simply change the timer to `OnCalendar=Sat *-*-* 01:00:00`. Another option worth calling out is `RandomizedDelaySec=900`. For environments with thousands (or even millions) of devices, it's a good idea to randomize check-ins like this to limit peak load on your registry infrastructure.
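Putting those two settings together, a drop-in override for the timer might look like this (created with `systemctl edit podman-auto-update.timer`; the schedule and delay are the example values from above):

```ini
# /etc/systemd/system/podman-auto-update.timer.d/override.conf
[Timer]
# Clear the default schedule before setting a new one
OnCalendar=
# Check for updates on Saturdays at 1 a.m.
OnCalendar=Sat *-*-* 01:00:00
# Spread check-ins over a 15-minute window to reduce peak registry load
RandomizedDelaySec=900
```

Remember to run `systemctl enable --now podman-auto-update.timer` so the schedule actually takes effect.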
Multi-container pod example
Unit files work for single containers as well as pods of containers. Using systemd to manage pods provides a much more robust foundation for running more complicated applications. I find this is a great alternative to `docker-compose` use cases.
Create a simple pod using the generic guestbook application:
```
# podman pod create --name guestbook -p 8080:80
# podman run -d --rm \
    --name guestbook-backend \
    --pod guestbook \
    --label io.containers.autoupdate=image \
    --sdnotify=container \
    docker.io/redis:latest
# podman run -d --rm \
    --name guestbook-frontend \
    --label io.containers.autoupdate=image \
    --sdnotify=container \
    --pod guestbook \
    -e GET_HOSTS_FROM="env" \
    -e REDIS_SLAVE_SERVICE_HOST="localhost:6379" \
    gcr.io/google_samples/gb-frontend:v6
# podman generate systemd --new -n -f --start-timeout 600 guestbook
```

The following unit files are created:

```
./pod-guestbook.service
./container-guestbook-frontend.service
./container-guestbook-backend.service
```
Now copy all three unit files to your systems. You only need to enable and start `pod-guestbook.service`; systemd will take care of the rest.
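With the three unit files copied to `/etc/systemd/system`, bringing the whole application up is a single enable command, and `podman pod ps` confirms the result:

```shell
# cp pod-guestbook.service container-guestbook-*.service /etc/systemd/system/
# systemctl daemon-reload
# systemctl enable --now pod-guestbook.service
# podman pod ps
```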
Scale across systems
At this point, you've seen how easy it is to define and run applications, whether made of a single container or groups of containers, using only simple operating system components. One thing I really like about this approach is that it minimizes the number of components in the stack. You also don't have to worry about error-prone activities like regenerating certificates or balancing more platform life cycles in the field.
Now that you understand how the pieces fit together, you can scale this across many systems. Perhaps the simplest way is to embed these unit files into operating system images. This way, when the node boots up, it begins by pulling the containers, and all is good.
However, this doesn't help with day-two changes. This is where you can leverage several technologies to fit the environment and use case. One widely deployed option to scale, automate, and manage edge infrastructure is Ansible. As there are so many things to automate in remote environments, why not use the same automation technology you're probably already using to scale your application deployments?
Ansible is easily capable of delivering everything described above. A new Linux system role for Podman, currently under development upstream, will make it even easier to generate and distribute these unit files across a fleet of systems. Note that the role is not yet supported or included in RHEL. The team welcomes any feedback you have on it, so please open an issue on the GitHub repository.
Alternatively, several available community projects have some compelling options. Two worth checking out are Fetchit and Flotta. They differ in scope and technology but provide an excellent day-two experience for administrators.
Regardless of how you scale this practice, using systemd and Podman remains the most lightweight way to run containers on Linux. One of our customers is doing this with over 40 applications per node today and loves the results. You might be thinking, "What about MicroShift? Isn't that a lightweight edge platform?" Yes, it is. For systems that benefit from having a Kubernetes API, MicroShift brings that to smaller devices. At the time of writing, it requires approximately 1GB of memory to run the platform and API. That's pretty amazing, considering all that's involved.
If you're deciding between MicroShift and plain RHEL, ask yourself two questions: 1) Do I need, or benefit from, having the Kubernetes API running directly on these devices? 2) Do I have the hardware resources? There will likely be other considerations for your environment, but hopefully these questions help you navigate the choice.
From a technology perspective, we live in an amazing time, which opens up many possibilities for converting traditionally static Linux systems at the edge to much more adaptable and valuable systems.