The "build once, run anywhere" approach has gained momentum alongside the growing use of containers. It has mainly targeted cloud-native development, but by using Podman, systemd, and OSBuild, it can now also apply to developing edge devices.
[ Getting started with containers? Check out Deploying containerized applications: A technical overview. ]
What does "build once, run anywhere" mean?
"Build once, run anywhere" was coined by Sun Microsystems long ago to describe the ability to write Java code once and run it anywhere. Recently, this phrase has expanded to include containers, meaning that developers can package their entire application and all its dependencies in a container and run it anywhere.
What does "running a container" really mean?
To run a container, you need the container image and its running instructions. Those instructions can take different forms: commands typed in a terminal, a Docker Compose file, or a Kubernetes YAML file. Even though you can reuse the same container image in each scenario, the format of the instructions changes. But what if you could use the same format everywhere?
By using podman kube play, you can pass Kubernetes manifests to Podman, which translates the Kubernetes objects into Podman objects.
Using this mechanism, you can employ the "build once, run anywhere" principle not just for containers but also for the instructions about how to run these containers.
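For example, you can hand Podman a plain Kubernetes pod manifest. The following is a minimal sketch; the my-pod.yml file name and its contents are illustrative and not part of the demo later in this article. Save a manifest like this:
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      # Illustrative image; replace with your own application
      image: docker.io/library/nginx:latest
      ports:
        - containerPort: 80
          hostPort: 8080
Then run it with Podman:
podman kube play my-pod.yml
The same file works unchanged on a Kubernetes cluster with kubectl apply -f my-pod.yml.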
Using systemd to monitor containers
Although you can run Podman as a daemon, in most cases, it runs without one. As a result, the containers are not monitored, so failed containers remain down with no service to restart them.
In Linux systems, systemd monitors processes. When you use a daemon to monitor a container, systemd monitors the daemon as well. You can skip added complexity by allowing systemd to monitor the container directly.
[ Get the Systemd commands cheat sheet ]
You can use systemd to run and manage containerized applications. That means systemd starts a containerized application and manages its entire lifecycle. Podman simplifies this process with the podman generate systemd command, which generates a systemd unit file for a specified container or pod. Alternatively, you can use the systemd template unit file for Kubernetes YAML files.
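As a rough sketch (the container name and image below are placeholders, not part of the demo), generating and enabling such a unit could look like this:
# Create a container, then let Podman write a unit file for it
podman create --name myapp registry.example.com/myapp:latest
podman generate systemd --new --files --name myapp
cp container-myapp.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now container-myapp.service
For Kubernetes YAML files, recent Podman versions also ship a podman-kube@.service template unit, so a single command such as systemctl enable --now podman-kube@$(systemd-escape /path/to/app.yml).service runs the whole pod under systemd.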
As of version 4.4, Podman includes built-in support for Quadlet, a systemd generator that lets you manage containerized services through declarative .container or .kube unit files, but that is a topic for a separate article.
Containers on the edge
OSBuild is an open source build pipeline that allows you "to create images of your Linux operating system in a reliable fashion, isolating the image creation from your host operating system, and producing a reliable, well-defined image ready to be deployed." Using OSBuild, you can embed the containers, Kubernetes YAML files, and systemd unit files into a device image that runs those workloads.
Bring it together with a demo
This demo uses a sample automotive application implemented in the sample automotive applications repository, along with a containerized version of vsomeip, which implements SOME/IP (Scalable service-Oriented MiddlewarE over IP), a communications protocol for automotive applications. Of note:
- A prebuilt version of these containers for AArch64 and x86-64 is available in the automotive container registry.
- All Kubernetes YAML files are located in the Git repository.
- Although the instructions get the files directly from GitLab, you may get the code by cloning the repo:
git clone https://gitlab.com/CentOS/automotive/sample-images.git
- This demo uses Kubernetes and the kubectl command, but you can achieve the same thing using OpenShift and the oc command.
[ Download the Podman basics cheat sheet ]
1. Deploy on Kubernetes
a. Create the namespace:
kubectl create namespace build-once-run-anywhere
b. Deploy vsomeip:
kubectl apply -n build-once-run-anywhere -f https://gitlab.com/CentOS/automotive/sample-images/-/raw/main/osbuild-manifests/files/ocp/vsomeip.yml
c. Deploy the engine application:
kubectl apply -n build-once-run-anywhere -f https://gitlab.com/CentOS/automotive/sample-images/-/raw/main/osbuild-manifests/files/ocp/engine.yml
d. Deploy the radio application:
kubectl apply -n build-once-run-anywhere -f https://gitlab.com/CentOS/automotive/sample-images/-/raw/main/osbuild-manifests/files/ocp/radio.yml
e. After the deployments are running, you can monitor changes in the radio service's volume based on notifications from the engine service:
kubectl get pods -n build-once-run-anywhere -l app=radio -o jsonpath='{.items[0].metadata.name}' | xargs kubectl logs -n build-once-run-anywhere -f
RADIO: Started main thread
RADIO: Started playing
Engine Service is NOT available.
Engine Service is available.
RADIO: Playing song "R-Cali" by A$AP Rocky feat. Aston Matthews & Joey Fatts (on Radio Los Santos) 50% volume
RADIO: Lowering volume due to reverse
RADIO: Playing song "R-Cali" by A$AP Rocky feat. Aston Matthews & Joey Fatts (on Radio Los Santos) 30% volume
RADIO: Playing song "Swimming Pools, Drank" by Kendrick Lamar (on Radio Los Santos) 30% volume
RADIO: Restoring volume due to cancelled reverse
RADIO: Playing song "Swimming Pools, Drank" by Kendrick Lamar (on Radio Los Santos) 50% volume
2. Deploy locally using Podman
a. Deploy vsomeip:
podman kube play https://gitlab.com/CentOS/automotive/sample-images/-/raw/main/osbuild-manifests/files/ocp/vsomeip.yml
b. Deploy the engine application:
podman kube play https://gitlab.com/CentOS/automotive/sample-images/-/raw/main/osbuild-manifests/files/ocp/engine.yml
c. Deploy the radio application:
podman kube play https://gitlab.com/CentOS/automotive/sample-images/-/raw/main/osbuild-manifests/files/ocp/radio.yml
d. After the deployments are running, you can monitor changes in the radio service's volume based on notifications from the engine service:
podman logs -f radio-pod-0-radio
RADIO: Started main thread
RADIO: Started playing
RADIO: Playing song "How It Was" by DJ Esco feat. Future (on Radio Los Santos) 50% volume
Engine Service is NOT available.
Engine Service is available.
RADIO: Playing song "Swimming Pools, Drank" by Kendrick Lamar (on Radio Los Santos) 50% volume
RADIO: Lowering volume due to reverse
RADIO: Playing song "Swimming Pools, Drank" by Kendrick Lamar (on Radio Los Santos) 30% volume
RADIO: Playing song "Hood Gone Love It" by Jay Rock feat. Kendrick Lamar (on Radio Los Santos) 30% volume
RADIO: Restoring volume due to cancelled reverse
RADIO: Playing song "Hood Gone Love It" by Jay Rock feat. Kendrick Lamar (on Radio Los Santos) 50% volume
3. Embed inside an image
For more information about how to build an image, see Building images on the Automotive SIG documentation site. The image manifest for this demo is the ocp.mpp file.
Contents of the manifest file
The manifest describes the different steps required to create the image.
Embed the container image
- type: org.osbuild.skopeo
  inputs:
    images:
      type: org.osbuild.containers
      origin: org.osbuild.source
      mpp-resolve-images:
        images:
          - source: registry.gitlab.com/centos/automotive/sample-images/demo/auto-apps
            tag: latest
          - source: registry.gitlab.com/centos/automotive/sample-images/demo/vsomeip
            tag: v0.1
  options:
    destination:
      type: containers-storage
      storage-path: /usr/share/containers/storage
Copy the unit and Kubernetes YAML files
- type: org.osbuild.copy
  inputs:
    ocp-vsomeip:
      type: org.osbuild.files
      origin: org.osbuild.source
      mpp-embed:
        id: vsomeip.yml
        path: ../files/ocp/vsomeip.yml
    unit-vsomeip:
      type: org.osbuild.files
      origin: org.osbuild.source
      mpp-embed:
        id: vsomeip.service
        path: ../files/ocp/vsomeip.service
  options:
    paths:
      - from:
          mpp-format-string: input://ocp-vsomeip/{embedded['vsomeip.yml']}
        to: tree:///demo/ocp/vsomeip.yml
      - from:
          mpp-format-string: input://unit-vsomeip/{embedded['vsomeip.service']}
        to: tree:///usr/lib/systemd/system/vsomeip.service
Notice the similar copy operations for the engine and radio services.
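The unit files themselves live in the same files/ocp/ directory of the repository. As a rough sketch only (not the repository's exact contents), a unit that wraps podman kube play around the embedded YAML file might look like this:
[Unit]
Description=Run the vsomeip pod
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=true
# Illustrative only: start the pod from the Kubernetes YAML file embedded in the image
ExecStart=/usr/bin/podman kube play /demo/ocp/vsomeip.yml
ExecStop=/usr/bin/podman kube down /demo/ocp/vsomeip.yml

[Install]
WantedBy=multi-user.target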
Enable the radio service
- type: org.osbuild.systemd
  options:
    enabled_services:
      - radio.service
Build the image
Now it's time to build the image.
For the demo
To build the demo image for an Arm machine on AWS, make this target:
cd osbuild-manifests
make cs9-aws-ocp-regular.aarch64.img
Building other images
Alternatively, you can build the demo image for x86_64 and run it locally with QEMU:
cd osbuild-manifests
make cs9-aws-ocp-regular.x86_64.qcow2
The repository also includes a convenient tool at osbuild-manifests/runvm that simplifies running the qcow2 image with QEMU virtualization.
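If you prefer to invoke QEMU directly instead of using runvm, a rough, illustrative invocation (machine type, memory, and firmware options depend on your host and may need adjusting) looks like:
qemu-system-x86_64 \
  -machine q35 -enable-kvm \
  -m 4096 -smp 2 \
  -drive file=cs9-aws-ocp-regular.x86_64.qcow2,format=qcow2,if=virtio \
  -nographic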
[ Try the Getting started with Red Hat OpenShift Service on AWS (ROSA) learning path. ]
Convert the image into an AWS AMI
To convert the image you created into an Amazon Machine Image (AMI), follow the instructions below.
Prerequisites
- An S3 bucket to upload the image to.
- The AWS command-line interface (CLI) installed.
- AWS credentials:
- If you are running the tool on your own machine, configure the AWS CLI with your credentials.
- If you are running it on an EC2 instance, attach an identity and access management (IAM) instance profile with the VMImporter policy to the instance.
Create the AMI
a. From the osbuild-manifests directory, run the export-image-aws.sh tool:
./tools/export-image-aws.sh cs9-aws-ocp-regular.aarch64.img <The name of the S3 Bucket> 8
b. Once completed, you can find the new AMI with the name cs9-aws-ocp-regular.aarch64.
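You can also verify the AMI from the command line with the AWS CLI, for example:
aws ec2 describe-images \
  --owners self \
  --filters "Name=name,Values=cs9-aws-ocp-regular.aarch64" \
  --query "Images[].[Name,ImageId,State]" \
  --output table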
Use the newly created AMI
Once you've created the AMI, you can start using it.
Start an EC2 instance
- Locate the AMI you created in the previous step in your AWS console (its name is cs9-aws-ocp-regular.aarch64).
- Click Launch instance from AMI.
- Complete the instance-launching flow:
- Give your instance a name.
- Choose the instance type. The recommended one is c6g.large.
- Set (or create) your key pair to Secure Shell (SSH) into the instance.
- Set (or create) the Security Group that allows SSH access to the machine from your IP.
- Click Launch instance.
Verify the service operation
SSH into your instance and check the logs coming from the radio service:
journalctl -u radio -f
RADIO: Started main thread
RADIO: Started playing
RADIO: Playing song "How It Was" by DJ Esco feat. Future (on Radio Los Santos) 50% volume
Engine Service is NOT available.
Engine Service is available.
RADIO: Playing song "R-Cali" by A$AP Rocky feat. Aston Matthews & Joey Fatts (on Radio Los Santos) 50% volume
RADIO: Playing song "Swimming Pools, Drank" by Kendrick Lamar (on Radio Los Santos) 50% volume
RADIO: Lowering volume due to reverse
RADIO: Playing song "Swimming Pools, Drank" by Kendrick Lamar (on Radio Los Santos) 30% volume
RADIO: Restoring volume due to cancelled reverse
RADIO: Playing song "Swimming Pools, Drank" by Kendrick Lamar (on Radio Los Santos) 50% volume
Build once, run anywhere
Container images are the manifestation of the "build once, run anywhere" approach. Using Kubernetes, Podman, and OSBuild (and, in the future, Quadlet), "anywhere" is now bigger than ever.
[ Boost security, flexibility, and scale at the edge with Red Hat Enterprise Linux. ]
About the author
Ygal Blum is a Principal Software Engineer who is also an experienced manager and tech lead. He writes code from C and Java to Python and Golang, targeting platforms from microcontrollers to multicore servers, and servicing verticals from testing equipment through mobile and automotive to cloud infrastructure.