A decade ago, containers were a novelty. Today, they are the indispensable driving force behind scalable, automated cloud architecture. Thanks to technologies like Kubernetes and Red Hat OpenShift, the advantages of containers have become abundantly clear to the IT community.
Early container engines worked well enough and provided a good starting point for container adoption, but after a decade in production, it’s time to treat containers as a default, integrated technology. Podman is a modern container engine for modern applications and cloud architecture. Here are 5 reasons to consider Podman for your servers.
1. There is no Podman daemon
Server admins don’t generally like to run a service in the background unnecessarily. A background service, also called a daemon, is just one more thing for the CPU to manage and monitor, so it’s nice when you can make a service available on your system without running it constantly in the background.
Some container engines require multiple daemons, even when no container is actively running. Podman does not. When you start a container in Podman, it runs essentially like an application. There’s no Podman daemon required to provide access to the container, or to keep it running. Once a container is running, Podman essentially disappears, using none of your system resources.
Podman uses a fork-and-exec model by default, which means the container process is a descendant of the Podman process that launched it. From a security point of view, this means the container processes have the same access as, or less than, the parent process that launched the container. The container also inherits the resource constraints of the parent. Finally, it means systemd can track the process and interact with it the same way it does with other processes and services running on the system, so advanced features like socket activation and sd_notify work.
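You can see that model for yourself with a couple of commands (a minimal sketch; the image name is just an example, and any image you already have pulled will do):

    # Start a container in the background (the image is only an example)
    podman run --rm -d --name demo registry.access.redhat.com/ubi9/ubi sleep 300

    # The container's workload shows up as an ordinary process owned by your
    # user, visible to ps and systemd alike, with no Podman daemon behind it
    ps -o pid,ppid,user,args -u "$USER" | grep 'sleep 300'

    # Clean up (the --rm flag removes the container once it stops)
    podman stop demo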
When containers run under a daemon, the container inherits the constraints placed on the daemon, not on the client process. Neither systemd nor the kernel has any knowledge of which client started the container; they know only about the daemon. A cgroup constraint on the daemon applies to the container, not to the client's cgroups. Many users of client-server container engines have no idea how privileged the daemon process is, especially when the daemon runs as root.
2. Podman doesn’t require root access
The “root” user on a system is the ultimate administrator of that system. The root account has unmitigated access to literally everything. That’s important for system maintenance, but it’s best when limited. You don’t want to run applications as root processes unless they require root access to your system, and you don’t want to enable root access for physical users when it’s not necessary.
By avoiding unnecessary root processes, you help protect your system from malicious code and users, and you help protect your users from potentially disastrous accidents (nobody should have to live in fear of bringing a cluster down with just one wrong command).
That’s why Podman, unlike other container engines, doesn’t run as root by default. To run on a privileged port (that is, lower than 1024), you must escalate to root, but a normal user can safely use Podman to run containers without so much as the sudo command.
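For instance, an ordinary user account can run a web server on an unprivileged port with no escalation at all (a minimal sketch; the image and port are just examples):

    # Run a rootless container as a regular user; port 8080 is above 1024,
    # so neither root nor sudo is needed
    podman run --rm -d --name web -p 8080:80 docker.io/library/nginx

    # The server answers on the unprivileged port
    curl http://localhost:8080

    # Clean up
    podman stop web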
3. Podman is versatile
When you run Podman on Red Hat Enterprise Linux (RHEL) or Fedora, you can use Cockpit to manage your Podman containers. If you work in an environment that has no access to the internet, then you can make your container images available through your own Red Hat Satellite infrastructure.
4. Podman is integrated
Containers are Linux, regardless of whether you’re running them on Windows, Mac, or Linux. When you choose to run Podman on Linux, though, you get full system integration. You can enlist features of the Linux operating system, like systemd, to manage and monitor your Podman containers.
With the Quadlet feature, you can run containers with systemd as easily as you would with Compose or Kubernetes. You declare what you want to run without having to deal with all the complexities of running the workload. You can define a complex application in Kube YAML, and then run the same application with Podman as you would on Red Hat OpenShift.
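As a rough sketch of what a Quadlet unit looks like (the file name, image, and port here are only examples), you drop a .container file into ~/.config/containers/systemd/ and let systemd generate the service:

    # ~/.config/containers/systemd/web.container
    [Unit]
    Description=Example web server managed by Quadlet

    [Container]
    Image=docker.io/library/nginx
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

After running systemctl --user daemon-reload, the generated web service can be started, stopped, and inspected with the usual systemctl --user commands, just like any other unit on the system.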
5. Podman Desktop
Containers used to be a tool for systems administrators, but they’ve since been adopted by developers and desktop users. Whether you’re an admin, developer, or just a user who loves to try out new applications and services, you may or may not be comfortable with a Linux terminal. The good news is that you don’t have to open a terminal at all to run Podman, thanks to Podman Desktop.
The Podman Desktop application allows you to create containers from custom or repository images, and it provides access to Kind, kubectl, Compose, and much more through extensions. Of course, it integrates with systemd on Linux systems, too. With Podman Desktop, you can create, use, monitor, and destroy containers through a feature-rich dashboard, whether you need the containers to test infrastructure, to run infrastructure, to develop applications, to run RHEL AI, or just to try out Podman and quickly compare it to other container solutions.
Containers are a native technology
Modern computing is largely built on container technology. It’s time to treat containers as a native technology, and to take advantage of the integrations and automation features available from your operating system. Whether you’re new to containers or just new to Podman, try Podman today with this no-cost lab.
About the author
Seth Kenlon is a Linux geek, open source enthusiast, free culture advocate, and tabletop gamer. Between gigs in the film industry and the tech industry (not necessarily exclusive of one another), he likes to design games and hack on code (also not necessarily exclusive of one another).