Red Hat Enterprise Linux's edge management capabilities are now built into Red Hat Insights to help organizations more securely manage and scale deployments at the edge. The benefits of zero-touch provisioning, system health visibility, and quick security remediations are available from a single interface. Most importantly, RHEL edge hosts are built using rpm-ostree, which lets you treat host OS updates much like updates on a mobile phone.
Podman is a powerful container engine that provides everything needed to run containers and groups of containers, known as pods. Podman has its origins in the Docker project and strives for compatibility and architectural simplicity. Podman enables you to install applications on your edge device in a stable, repeatable manner, much like on a mobile phone.
The combination of RHEL for Edge and Podman minimizes the amount of work required to maintain applications due to the increased automation and simplicity built into the solution. Environments that are periodically connected or have unreliable connections to the internet are well-suited for the deployment of this solution.
In this blog, I will show how you can quickly build and deploy a RHEL image for edge devices, with minimal interaction between the host and the operator. I will then show you how to deploy the Pi-hole application as a service (though the model is valid for many other applications besides Pi-hole). This solution can be automatically updated to successive versions of RHEL, and the containerized application will automatically update as new versions are released. Additionally, if the container fails to start after an update, Podman will automatically revert it to the previous version.
The diagram above shows a concept where a host in Data Center 1 is configured to run the Pi-hole application via Podman. Hosts in Data Center 1, Data Center 2, and the internet can be configured to communicate over the Wireguard virtual private network (VPN). This blog entry will only discuss the configuration of the Wireguard server in Data Center 2 and the Wireguard client in Data Center 1 running Pi-hole.
Edge management
Red Hat provides a security-driven and scalable service for deploying and managing edge devices at console.redhat.com. Red Hat Enterprise Linux provides a consistent, flexible, and security-focused platform for your enterprise workloads, enabling faster data delivery and driving new innovations at the edge. More information is available here.
RHEL for Edge is a group of tools that make it easy to manage compute resources at the edge. The rpm-ostree subsystem forms the core of these technologies. rpm-ostree enables you to perform atomic updates and rollbacks of your operating system so that changes, updates, and upgrades can be replicated and performed on devices that may be difficult to access or periodically disconnected from the internet. For more information on rpm-ostree, please visit this page.
Imagine treating your operating system upgrade as copying over a snapshot of upgraded files. It becomes unnecessary to make complete copies of files; only changes need to be replicated. Also, if the upgrade fails to work, you can roll back to the previous version of your operating system.
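To make the rollback idea concrete, here is a minimal sketch of the two rpm-ostree commands involved. It assumes you are on an rpm-ostree host such as RHEL for Edge; the guard lets the snippet degrade gracefully elsewhere.

```shell
# Illustrative only: inspect deployments and recover from a bad upgrade.
if command -v rpm-ostree >/dev/null 2>&1; then
  rpm-ostree status || true   # lists the booted deployment and the rollback deployment
  # To revert after a failed upgrade:  rpm-ostree rollback && systemctl reboot
else
  echo "rpm-ostree not found; run this on a RHEL for Edge host"
fi
```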
Create the image
First, you’ll create a RHEL edge image in Insights. I wrote about how to create edge images at console.redhat.com here.
You'll want to create a RHEL for Edge Installer image, as in the image below.
Install the container-tools meta-package to get all the Podman tools you'll need. You'll also need wireguard-tools to set up the Wireguard VPN.
Install the image
Once the image is built, download it and install it on the device of your choice. The installation details are available in the previously linked blog post.
VPN with Wireguard
For my environment, I decided to set up Wireguard to VPN my edge device to my data center network. You can use whatever VPN solution you like, but in this case, Wireguard makes a lot of sense because it’s easy to set up, and you can avoid having to customize the edge network to accommodate the VPN. Wireguard is particularly easy to use when dealing with Network Address Translation and firewalls. For more information on Wireguard, see the official Red Hat documentation here.
Another benefit of configuring a Wireguard VPN is simplifying the management of edge devices by creating a single network domain, eliminating the need to maintain up-to-date knowledge of external networking and routing. Rather than having to look up and configure edge devices behind NATs and firewalls, you can simply connect directly to an edge device by IP or hostname.
Once the Wireguard VPN is configured, you can connect to remote hosts without worrying about port forwarding, NAT, or routing.
Below are the steps to set up Wireguard.
Install Wireguard
Edge device
Configure the edge image (see above) to be installed with the wireguard-tools package.
Server
On the Wireguard server, run the following command as root:
dnf install -y wireguard-tools
The Wireguard server used in this example is not using an Edge management image.
Generate encryption keys
Wireguard consists of server and client components. In this example, a Wireguard server listens for incoming connections from Wireguard clients. The Wireguard server and client(s) authenticate and encrypt traffic with encryption keys. For more information about Wireguard protocols, visit here.
Run the following on both the client and server to generate private and public keys:
wg genkey | tee /etc/wireguard/$HOSTNAME.private.key | wg pubkey > /etc/wireguard/$HOSTNAME.public.key
Configure the server
On the server, create the configuration file /etc/wireguard/wg0.conf. It should contain something similar to the following:
[Interface]
Address = 192.0.2.1
ListenPort = 51820
PrivateKey = server_private_key

[Peer]
PublicKey = client_public_key
AllowedIPs = 192.0.2.2/24
The server IP address is set to 192.0.2.1, and the client's IP address is set to 192.0.2.2. Replace server_private_key with the contents of the private key file on the server, which lives in the /etc/wireguard directory and is named after the server's hostname. Replace client_public_key with the contents of the client's public key file, found in the /etc/wireguard directory on the client.
ListenPort = 51820 tells the server to listen for incoming connections from Wireguard clients on port 51820. You'll need to open UDP port 51820 incoming on your firewall(s).
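On a RHEL host that uses firewalld, opening the port might look like the following sketch. The guard is there so the snippet degrades gracefully if firewalld isn't installed; adjust the zone and privileges for your environment.

```shell
# Open the WireGuard listen port (UDP 51820) in firewalld, if present.
if command -v firewall-cmd >/dev/null 2>&1; then
  firewall-cmd --permanent --add-port=51820/udp || echo "could not update firewalld (are you root?)"
  firewall-cmd --reload || true
else
  echo "firewalld not found; open UDP port 51820 with your firewall of choice"
fi
```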
Next, you need to create a virtual network interface from the server configuration. Run the following command:
nmcli connection import type wireguard file /etc/wireguard/wg0.conf
This will create a new NIC called wg0. To view it, use the command nmcli dev.
$ sudo nmcli dev
DEVICE  TYPE       STATE      CONNECTION
eth0    ethernet   connected  System eth0
lo      loopback   connected  lo
wg0     wireguard  connected  wg0
Configure the client
On the edge device (aka the client), create the configuration file /etc/wireguard/wg0.conf. It should contain something similar to the following:
[Interface]
Address = 192.0.2.2/24
PrivateKey = client_private_key

[Peer]
PublicKey = server_public_key
AllowedIPs = 192.0.2.0/24
Endpoint = server_public_ip:51820
PersistentKeepalive = 20
It can be confusing that the client's configuration describes the Wireguard server in a [Peer] section, so keep that in mind. The Endpoint = server_public_ip:51820 key-value pair specifies the public IP address and port of the Wireguard server.
Just as you did with the server, run the following command to create the Wireguard virtual NIC:
nmcli connection import type wireguard file /etc/wireguard/wg0.conf
Check that the device was created successfully.
$ sudo nmcli dev
DEVICE  TYPE       STATE        CONNECTION
eth0    ethernet   connected    System eth0
lo      loopback   connected    lo
wg0     wireguard  connected    wg0
enp2s0  ethernet   unavailable  --
Test the connection
On the server, run the command wg.
$ sudo wg
interface: wg0
  public key: ************
  private key: (hidden)
  listening port: 51820

peer: *********
  endpoint: *********
  allowed ips: 192.0.2.0/24
  latest handshake: 1 minute, 27 seconds ago
  transfer: 30.61 KiB received, 8 KiB sent
You can see that data is being transferred between the server and the peer.
On the edge device, run wg:
$ sudo wg
interface: wg0
  public key: ************
  private key: (hidden)
  listening port: 49489

peer: *********
  endpoint: *********
  allowed ips: 192.0.2.0/24
  latest handshake: 1 minute, 24 seconds ago
  transfer: 7.90 KiB received, 30.20 KiB sent
  persistent keepalive: every 20 seconds
As with the server above, you can see that data is being transferred.
From the edge device, ping the server address 192.0.2.1 to check that the tunnel works. From the server, you can ping 192.0.2.2.
Podman
Configure Podman containers to persistently run as a service under a non-root user account
Once the edge device is installed and configured, you’ll configure Podman to run containers persistently in your non-root user account as a service. If you don’t do this, the container will stop running once you log out of your account.
You want to configure your container to run as a service in your non-root account so that you can configure it to start automatically as a systemd service when the edge device boots up. Additionally, running a container in a non-root account is generally considered more secure. For more information, Scott McCarty wrote extensively about rootless containers here.
You need to configure systemd, the service manager in Red Hat Enterprise Linux, to run services belonging to your account when your user account is logged out. Run the following command:
loginctl enable-linger <your username>
Remember, if you don’t run this command, your rootless container service will stop running when you log out of your host!
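You can confirm lingering took effect with a quick check. This sketch assumes the example account myee used later in this post; replace it with your own user name, and note the guard for systems without logind.

```shell
# Confirm lingering is enabled; expect "Linger=yes" once enable-linger has run.
if command -v loginctl >/dev/null 2>&1; then
  loginctl show-user myee --property=Linger || echo "user myee not known to logind"
else
  echo "loginctl not available on this system"
fi
```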
Configure and run Pi-hole
In this step, I’ll briefly discuss the options I used to configure my Pi-hole and how these options are relevant to other types of application containers. All of these tasks should be performed as a non-root user (except for modifying the sysctl.conf file). I’ll be using a tool from the Quadlet project to configure the container, create persistent storage and manage the systemd unit files. The Quadlet component of Podman is in technology preview in RHEL 9.2 and reduces the complexity of managing containers. For more information on Quadlet, please read this blog.
The files used in this section are available in this GitHub repository.
What is Pi-hole, and why are you using it?
I have chosen the Pi-hole application because it is complicated enough that it could be similar to other applications other people would like to install. In particular, Pi-hole requires persistent storage and the use of a privileged network port.
Ports
Pi-hole acts as a DNS resolver in your network. When other computers in your network perform DNS lookups against the Pi-hole server, requests to ad services and clickbait/malicious sites are blocked. Port 53 UDP and TCP must be open for DNS to function. Port 80 TCP is opened to enable access to the web-based management UI. For more information on running a Pi-hole container, read this documentation.
By default, Red Hat Enterprise Linux forbids non-root services from binding to port 53, and we are running Pi-hole as a non-root user. You'll have to run the following command as root to tell the operating system to treat port 53 as an unprivileged port:
echo 'net.ipv4.ip_unprivileged_port_start = 53' >> /etc/sysctl.conf
Then run the following command to load the change from the /etc/sysctl.conf file:
sysctl -p
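To confirm the kernel picked up the change, you can read the value back directly from /proc:

```shell
# Read the current setting; after `sysctl -p` it should report 53 (the default is 1024).
cat /proc/sys/net/ipv4/ip_unprivileged_port_start
```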
Configure persistent storage
First, make a directory in your home directory to store your container configuration files.
mkdir -p ~/.config/containers/systemd/
Podman provides a facility for defining persistent storage through configuration files. Our Pi-hole app requires two persistent volumes, so you'll need to create a configuration file for each of the volumes in the directory ~/.config/containers/systemd/.
The two volume files are pihole-etc.volume and pihole-dnsmasq.volume. The former configures a volume that stores Pi-hole-specific configuration and log files; the latter is for DNS configuration files. Both volume files are identical in content:
[Volume]
User=your_user
Group=your_user
your_user is the user account that Pi-hole will run in. Specifying the User and Group ensures that your account can access the persistent storage. The names of the volume files will be referenced in the container configuration file in the next section.
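Since both files are identical apart from their names, you can generate them in one loop. This sketch assumes the example account myee (the account used later in this post); substitute your own user name.

```shell
# Create both Quadlet volume files with identical [Volume] sections.
mkdir -p ~/.config/containers/systemd
for vol in pihole-etc pihole-dnsmasq; do
  cat > ~/.config/containers/systemd/"$vol".volume <<EOF
[Volume]
User=myee
Group=myee
EOF
done
```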
Configure and download the container
Podman makes it easy to download and configure a container to run as a non-root service with a single configuration file. If you are familiar with docker-compose, you may be delighted to use this new feature.
Create a file called ~/.config/containers/systemd/pihole.container and copy the following into it:
[Service]
Restart=always

[Container]
ContainerName=pihole
Image=docker.io/pihole/pihole:latest
Label="io.containers.autoupdate=registry"
Environment=TZ=America/Vancouver
Environment=SERVERIP=your_edge_device_ip
Environment=WEBPASSWORD=webgui_password
PodmanArgs=--dns=127.0.0.1 --dns=8.8.8.8
Volume=pihole-etc.volume:/etc/pihole:z
Volume=pihole-dnsmasq.volume:/etc/dnsmasq.d:z
PublishPort=53:53/tcp
PublishPort=53:53/udp
PublishPort=80:80/tcp

[Install]
WantedBy=default.target
The pihole.container file is packed with information. This particular configuration has three sections: Service, Container, and Install.
Service
This section defines service parameters, that is, how the container will run as a service.
Container
This section defines the parameters required to run Pi-hole. You can read more about them here. Replace the following values with your own:
your_edge_device_ip
webgui_password
Also include the following label.
Label="io.containers.autoupdate=registry"
This parameter will enable Podman to pull down updates from the container's registry and restart it. You can automate these updates with a systemd timer, which I will discuss later in this post. For more information on automatic updates and rollback, please read this blog.
Volume=pihole-etc.volume:/etc/pihole:z and Volume=pihole-dnsmasq.volume:/etc/dnsmasq.d:z reference the volume files you created in the previous step and map them to directories within the container.
Install
This section contains the key-value pair WantedBy=default.target, which tells systemd to start the container as a service after an OS reboot.
Run the Pi-hole container service
Run the following command to tell systemd about your new service defined in pihole.container, as well as pihole-etc.volume and pihole-dnsmasq.volume.
systemctl --user daemon-reload
Let’s check your systemd unit files. Run the following:
/usr/libexec/podman/quadlet --dryrun --user
If the configuration files are free of syntax errors, you’ll see something like this.
[myee@protectli1 ~]$ /usr/libexec/podman/quadlet -dryrun -user
quadlet-generator[6088]: Loading source unit file /var/home/myee/.config/containers/systemd/pihole-dnsmasq.volume
quadlet-generator[6088]: Loading source unit file /var/home/myee/.config/containers/systemd/pihole-etc.volume
quadlet-generator[6088]: Loading source unit file /var/home/myee/.config/containers/systemd/pihole.container
---pihole-dnsmasq-volume.service---
[X-Volume]
User=myee
Group=myee

[Unit]
RequiresMountsFor=%t/containers

[Service]
ExecStart=/usr/bin/podman volume create --ignore --opt o=uid=0,gid=0 systemd-pihole-dnsmasq
Type=oneshot
RemainAfterExit=yes
SyslogIdentifier=%N
---pihole-etc-volume.service---
[X-Volume]
User=myee
Group=myee

[Unit]
RequiresMountsFor=%t/containers

[Service]
ExecStart=/usr/bin/podman volume create --ignore --opt o=uid=0,gid=0 systemd-pihole-etc
Type=oneshot
RemainAfterExit=yes
SyslogIdentifier=%N
---pihole.service---
[Service]
Restart=always
Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStopPost=-/usr/bin/podman rm -f -i --cidfile=%t/%N.cid
ExecStopPost=-rm -f %t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name=pihole --cidfile=%t/%N.cid --replace --rm --log-driver passthrough --runtime /usr/bin/crun --cgroups=split --sdnotify=conmon -d -v systemd-pihole-etc:/etc/pihole:z -v systemd-pihole-dnsmasq:/etc/dnsmasq.d:z --publish 53:53/tcp --publish 53:53/udp --publish 80:80/tcp --env SERVERIP=10.0.0.30 --env TZ=America/Vancouver --env WEBPASSWORD=lol_u_wish --label io.containers.autoupdate=registry --dns=127.0.0.1 --dns=8.8.8.8 docker.io/pihole/pihole:latest

[X-Container]
ContainerName=pihole
Image=docker.io/pihole/pihole:latest
Label="io.containers.autoupdate=registry"
Environment=TZ=America/Vancouver
Environment=SERVERIP=10.0.0.30
Environment=WEBPASSWORD=lol_u_wish
PodmanArgs=--dns=127.0.0.1 --dns=8.8.8.8
Volume=pihole-etc.volume:/etc/pihole:z
Volume=pihole-dnsmasq.volume:/etc/dnsmasq.d:z
PublishPort=53:53/tcp
PublishPort=53:53/udp
PublishPort=80:80/tcp

[Install]
WantedBy=default.target

[Unit]
SourcePath=/var/home/myee/.config/containers/systemd/pihole.container
RequiresMountsFor=%t/containers
Requires=pihole-etc-volume.service
After=pihole-etc-volume.service
Requires=pihole-dnsmasq-volume.service
After=pihole-dnsmasq-volume.service
In the [Unit] section, notice that Podman is smart enough to know that the storage unit files should be started before the container!
Start the service:
systemctl --user start pihole.service
This command will download the container from the registry and run it, based on the parameters defined in pihole.container.
Test that the Pi-hole container works by navigating to the web UI at http://edge_device_ip/admin/login.php.
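Beyond the web UI, you can check DNS filtering from the shell. This sketch assumes the dig utility (from bind-utils) and uses the edge device's Wireguard address 192.0.2.2 from earlier; doubleclick.net is just an example of a domain Pi-hole commonly blocks.

```shell
# Query the Pi-hole resolver directly; blocked domains typically return 0.0.0.0.
if command -v dig >/dev/null 2>&1; then
  dig @192.0.2.2 doubleclick.net +short || true
else
  echo "dig not found; install bind-utils to test DNS from the shell"
fi
```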
If you had tried to do all this without the pihole.container specification or the volume configuration files, you'd have several more steps to perform.
Testing auto-updates
Test that your auto-updates are configured properly with the following command:
$ podman auto-update --dry-run
UNIT                      CONTAINER    IMAGE            POLICY    UPDATED
container-pihole.service  0bfa2dad...  docker.io/pi...  registry  false
Enable podman-auto-update to run automatically
Podman includes a systemd timer that can schedule when the auto-update runs. Think of systemd timers as a more powerful and declarative version of cron. You can enable this by simply running:
systemctl --user enable --now podman-auto-update.timer
By default, this will check for an updated container image daily. You can easily restrict it to running weekly, at a time when you're less likely to be disrupted by an upgrade, by creating a drop-in for the timer with systemctl --user edit podman-auto-update.timer.
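For example, a drop-in that switches the schedule to Sundays at 03:00 might look like the following sketch (the day and hour are assumptions; pick your own maintenance window). The empty OnCalendar= line clears the default schedule before the new one is set:

```
[Timer]
OnCalendar=
OnCalendar=Sun *-*-* 03:00:00
```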
How does this apply to your environment and more importantly, your job?
In a nutshell, we’ve performed the following steps to set up an edge infrastructure:
- created an rpm-ostree based Red Hat Enterprise Linux image in console.redhat.com
- configured a VPN to the edge device
- created a Podman container configuration file, as well as persistent storage configuration
- automatically configured the container to run as a systemd service
- enabled automatic updates to the container
This solution is fantastic because you can apply OS updates automatically with minimal risk of the device failing. Should the update fail, the process to revert the update is simple and initiated with a few button clicks from console.redhat.com.
If the container is updated with podman auto-update and fails to start, it will automatically revert to the previous version.
All at once in the pihole.container file, you defined a container to be run as a service, to update automatically, and to download the container image if required. If you are deploying multiple edge devices that require the same container, simply copy over the Podman container configuration files and run systemctl --user daemon-reload followed by systemctl --user start pihole.service.
This process requires several more steps without these configuration files.
The Wireguard VPN creates a network domain that simplifies network connectivity between hosts, making it easier to control or manage them without having to worry about network configuration. Instead of looking up the external IP addresses of your edge devices, you only need to know the Wireguard interface IPs. The best part is you don’t have to configure port forwarding if your device is sitting behind a router performing network address translation.
Acknowledgments
A big thanks to Ben Breard for his help and motivational chats.
About the author
As a Senior Principal Technical Marketing Manager in the Red Hat Enterprise Linux business unit, Matthew Yee is here to help everyone understand what our products do. He joined Red Hat in 2021 and is based in Vancouver, Canada.