Podman is known for its tight and seamless integration into Linux systems. Staying true to the "containers are Linux" philosophy, we make containerization as easy as possible. A core component of modern Linux systems is systemd, which is the de-facto standard for managing services and their dependencies. Early on, we realized that a seamless integration of Podman and systemd is important to our users.
Podman and systemd
We approach seamless integration with systemd in two ways: Podman running systemd inside a container, and running Podman inside of systemd services. Let's look at both use cases.
First, we want Podman to run systemd inside a container. Running systemd in a container requires Podman to set up certain mounts that systemd expects: for instance, tmpfs mounts on /run, /run/lock, /tmp, and /var/log/journald, along with some configuration of /sys/fs/cgroup (depending on whether the system is in cgroup V1 or V2 mode). Podman does this automatically if the entry point of the container is either /usr/sbin/init or /usr/sbin/systemd. You can also force this behavior with the --systemd=always flag on the command line.
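For example, here is a minimal sketch of running a systemd-based container; the image and container name are placeholders, and any image that ships systemd behaves the same way:

# Podman detects the /usr/sbin/init entry point and sets up the tmpfs
# mounts and cgroup configuration automatically.
podman run -d --name mysystemd registry.fedoraproject.org/fedora:32 /usr/sbin/init

# Alternatively, force the systemd-specific setup regardless of the entry point.
podman run -d --name mysystemd --systemd=always registry.fedoraproject.org/fedora:32 /usr/sbin/init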
Running systemd inside Podman containers has been possible for many years, making the lives of users and vendors much easier. Many packages require systemd to run the service and properly manage its dependencies. Web servers are a prime example. Before the ability to use systemd in containers, vendors and users were forced to manually work around their standards to distribute and install packages in containers. By using Podman, there is no difference anymore: "Containers are Linux," right? A huge step forward!
The second use case is running Podman inside of systemd services. For many years, there has been a growing demand for containerized systemd services. Users want to use systemd to install, run, and manage their applications using the new paradigm of containerization. In fact, some applications are now exclusively delivered as containers, further increasing the demand.
One of the most common questions from users is: "How do I run a container within a systemd unit file?" Users are looking for best practices.
Systemd needs to know which processes are part of a service so it can manage them, track their health, and properly handle dependencies. Attempts to support such scenarios with Docker have failed. A core problem is Docker's client-server architecture: it is practically impossible to track container processes, and pull requests to improve the situation have been rejected.
Podman implements a more traditional architecture by forking processes, such that each container is a descendant process of Podman. This architecture integrates better into modern Linux systems. Features like sd-notify and socket activation make this integration even more important. The sd-notify mechanism allows a service to notify systemd that the process is ready to receive connections, and socket activation permits systemd to launch the containerized process only when a packet arrives on a monitored socket.
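As a rough illustration of these two mechanisms (generic systemd examples, not specific to Podman; the unit and binary names are hypothetical):

# example.service: with Type=notify, systemd waits until the process itself
# reports readiness, e.g. via sd_notify(READY=1) or the systemd-notify tool.
[Service]
Type=notify
ExecStart=/usr/local/bin/my-server

# example.socket: systemd owns the listening socket and starts example.service
# only when the first connection arrives.
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target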
The forking architecture brings further benefits. First, the audit subsystem effectively tracks and records user actions on the system; as Dan Walsh mentioned in a blog post, auditing containers dramatically improves security and may even be a core requirement for running containers in the first place. Second, it allows systemd to track the processes in a container and hence opens the door for a seamless integration of Podman and systemd.
Auto-generate containerized systemd units
In a previous article, I mentioned that Podman ships with a widely used feature to generate systemd units for containers and pods. Migrating a container to a systemd unit is as simple as executing podman generate systemd $container. By default, Podman generates a unit that starts and stops an existing container, so those units are tied to the host where the container already exists. If we want to create more portable systemd units to deploy on other machines, we use podman generate systemd --new. The --new flag instructs Podman to generate units that create, start, and remove containers.
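For example (mycontainer is a placeholder for an existing container name):

# Generate a unit tied to this host: it starts and stops the existing container.
podman generate systemd mycontainer

# Generate a portable unit that creates, starts, and removes the container.
podman generate systemd --new mycontainer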
Podman 2.0 ships with several noteworthy improvements and enhancements for running Podman in systemd units:

- podman generate systemd generates more robust services that properly start, even after a system crash.
- Podman now supports generating unit files with the --new flag for pods. Previously, the --new flag was limited to containers; a major refactoring of the backend allowed for supporting pods (see the sketch after this list).
- Improved documentation in the man pages on how to use podman generate systemd, how to run and install the generated units as root and as ordinary users, and how to enable the services at system start.
- Container units that are part of a pod can now be restarted. Such restarts are especially helpful for auto-updates.
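Here is a sketch of that workflow for a pod; the pod name mypod is a placeholder:

# Generate portable units for the pod and all of its containers. --files writes
# them to disk, and --name uses the pod and container names in the file names.
podman generate systemd --new --files --name mypod

# Install and enable the pod's unit as an ordinary user.
mv pod-mypod.service container-*.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now pod-mypod.service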
Auto-updates bring us to the next topic.
Podman auto-update
One new use case we have developed in Podman is auto-update. Podman users want to set up a service on a system that will manage its own updates. Imagine you configure a service to run a container image, and a month later you add new features to the application in the image, or, more importantly, a new CVE is found. You would need to update the image and then recreate the service on each node. We want to automate this process so that each service watches for new images to arrive in a container registry. The services automatically update to the latest image and re-create the container. No human interaction required.
Podman 1.9 was the first release to ship with the podman auto-update command, which allows for updating services when the container image has been updated on the registry. To use auto-updates, containers must be created with --label "io.containers.autoupdate=image" and run in a systemd unit generated by podman generate systemd --new. When running podman auto-update, Podman first looks up running containers with the "io.containers.autoupdate" label set to "image" and then reaches out to the container registry to check whether the image of those containers has changed. If the image has changed, Podman restarts the corresponding systemd unit to stop the old container and create a new one with the updated image. This way, the container, its environment, and all dependencies are easily restarted.
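Here is a rough sketch of the full workflow, assuming root and a hypothetical, fully qualified image registry.example.com/myapp:latest:

# Create the container with the auto-update label.
podman create --name myapp --label "io.containers.autoupdate=image" registry.example.com/myapp:latest

# Generate a portable unit, install it, and enable the service.
podman generate systemd --new --files --name myapp
mv container-myapp.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now container-myapp.service

# Later: check the registry and restart the unit if the image has changed.
podman auto-update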
Updates are triggered via a systemd timer or by external triggers running podman auto-update. For more details, please refer to the upstream documentation.
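As an illustration, a minimal timer and service pair for running the command once a day could look like the following; the unit names here are placeholders, so check the upstream documentation for the units Podman ships:

# podman-auto-update.service
[Unit]
Description=Run podman auto-update

[Service]
Type=oneshot
ExecStart=/usr/bin/podman auto-update

# podman-auto-update.timer
[Unit]
Description=Run podman auto-update daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target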
While Podman 2.0 mainly comes with small improvements and bug fixes for auto-updates, we want to encourage users to try out this feature. Auto-updates are still marked as experimental because we want to collect more feedback. We want to meet as many use cases as possible before marking auto-updates as stable.
More updates coming soon
There is so much potential with Podman 2.0 and its systemd improvements. Go try it out, and feel free to give us feedback and contribute upstream! We can now enjoy all the benefits mentioned in this article, and we are already working on further improvements upstream to allow even tighter integration with systemd and to properly reuse the services' cgroups. Furthermore, there is a wonderful community contribution by Joseph Gooch to support the sd-notify mechanism, which greatly simplifies the generated systemd units and opens the door to more use cases.
[ Getting started with containers? Check out this free course. Deploying containerized applications: A technical overview. ]
About the author
Daniel Walsh has worked in the computer security field for over 30 years. Dan is a Senior Distinguished Engineer at Red Hat. He joined Red Hat in August 2001 and has led the Red Hat Container Engineering team since August 2013, though he has been working on container technology for several years.
Dan helped develop sVirt (Secure Virtualization) as well as the SELinux Sandbox, an early desktop container tool, back in RHEL 6. Previously, Dan worked at Netect/Bindview on vulnerability assessment products and at Digital Equipment Corporation on the Athena Project and the AltaVista Firewall/Tunnel (VPN) products. Dan has a BA in Mathematics from the College of the Holy Cross and an MS in Computer Science from Worcester Polytechnic Institute.