Fascinating changes are happening in the automotive industry. Car manufacturers are moving away from discrete Electronic Control Units (ECUs) in favor of consolidating computing resources into bigger, high-performance computers. The move to a more dynamic system that enables software-defined vehicles (SDVs) is bringing two formerly separate worlds together: automotive and the datacenter-centric IT industry.

Red Hat is a key player in this transformation. In 2022, Red Hat announced its intention to develop the Red Hat In-Vehicle OS, a general-purpose in-vehicle operating system destined to become the foundation of SDVs. Later that same year, we published a blog post explaining how running containers in cars is driving the evolution towards SDVs.

That being said, there is still a lot that separates these two worlds, starting with expectations about system speed and responsiveness.

In the world of datacenters, servers can take several minutes to boot, and restarts sometimes happen as infrequently as once a year. In this type of environment, the overhead of a few seconds to start applications in a container is irrelevant.

In automobiles, however, systems must boot within seconds, and they reboot almost every time the car goes through a power cycle. There is also an important user-experience aspect regarding how long it takes for a system to start and be usable. When we get into our cars, we want to drive immediately. Every second waiting to start is unpleasant. Delays make us feel that the systems are getting in the way.

We became aware of these differing expectations around system speed and responsiveness while building the Red Hat In-Vehicle Operating System. This gave us an opportunity to investigate the overhead of starting an application in a container via Podman, something we hadn't concentrated on before.

Dan Walsh published the outcome of this investigation in his blog post (warning: spoiler alert in the title): How we achieved a 6-fold increase in Podman startup speed.

Dan's investigation explored the different areas that were optimized, but it did not make recommendations about how to analyze such improvements yourself. One way to trace and analyze performance is with eBPF.

eBPF is a Linux kernel technology that provides flexible and safe instrumentation of the kernel without requiring any changes to the kernel code itself. It can be used to monitor, secure, and optimize various systems, including those you'll find in a vehicle. eBPF has many advantages, but it also has some limitations. eBPF programs must be written using a restricted instruction set, which can be limiting in some use cases, so developers may need to find workarounds or optimizations to achieve the desired functionality.

Additionally, the kernel verifier, which tries to prevent potential security risks, imposes restrictions on a program's complexity and loop structures. This may limit the range of programs that can be implemented using eBPF. Access to specific kernel structures is also limited. While tracepoints provide a stable API to interact with, depending on the use case, developers might need to attach kprobes to kernel functions, whose signatures are not guaranteed to remain stable between kernel versions.
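To illustrate the tracepoint-based approach, here is a minimal bpftrace one-liner (a sketch, not part of the original investigation) that logs every exec on the system. Because it attaches only to a stable tracepoint, it avoids the kprobe signature-stability caveat mentioned above. It must be run as root with the bpftrace package installed:

```
# Print the command name and the binary path for every execve() call.
# tracepoint:syscalls:sys_enter_execve is a stable kernel tracepoint.
bpftrace -e 'tracepoint:syscalls:sys_enter_execve {
    printf("%s -> %s\n", comm, str(args->filename));
}'
```

Pressing Ctrl+C stops the trace. The same pattern, extended with timestamps on the fork and exec paths, is the core idea behind the fork_exec_snoop.bt program used below.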

While it is straightforward to run an eBPF program that analyzes the container startup time on Fedora or RHEL, performing the same steps on AutoSD requires some additional work.

Build the OS

First, you need to build a customized version of the operating system. To do so, configure the build host as described in the Automotive SIG documentation.

Then clone the sample-images repository and run the build:

$ git clone https://gitlab.com/CentOS/automotive/sample-images.git
$ cd osbuild-manifests/
$ make cs9-qemu-containerperf-ostree@bpftrace.x86_64.qcow2 DEFINES='extra_rpms=["bpftrace"]'

After the build is finished, run the VM and log in as root using the default password (password):

$ ./runvm cs9-qemu-containerperf-ostree@bpftrace.x86_64.qcow2

After you've logged in, download the eBPF program and container image used in this demo:

$ podman pull quay.io/fedora/fedora:latest
$ curl -o fork_exec_snoop.bt \
  https://raw.githubusercontent.com/containers/podman/main/hack/fork_exec_snoop.bt
$ chmod +x fork_exec_snoop.bt

Run the eBPF program and start the tracing:

$ ./fork_exec_snoop.bt > fork_exec.log & sleep 10 && \
podman run --pull=never --network=host --security-opt seccomp=unconfined \
quay.io/fedora/fedora:latest true

Make sure no other process invokes Podman during the measurement; while this would not affect the results, it might pollute the log output.

Finally, inspect the log and look for the syscalls:sys_exit_execve entry to get the startup time on your system, as described in the original performance tracing blog post.

$ less fork_exec.log
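Rather than reading the log by eye, you can also extract the startup time with a short awk script. The snippet below is a sketch: the sample log is hypothetical, and the real field layout depends on the version of fork_exec_snoop.bt you downloaded, so adjust the field references to match your output. The idea is simply to subtract the first fork timestamp from the final exec-exit timestamp:

```shell
# Hypothetical log excerpt (nanosecond timestamp, event, command);
# the real fork_exec_snoop.bt output format may differ.
cat > sample.log <<'EOF'
123000000 fork             podman
123400000 sys_enter_execve podman
125000000 sys_exit_execve  true
EOF

# Startup time = last exec-exit timestamp minus first timestamp (ns).
awk 'NR == 1 { start = $1 } /sys_exit_execve/ { end = $1 }
     END { printf "startup: %d ns\n", end - start }' sample.log
# → startup: 2000000 ns
```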

So far, we have measured Podman starting a single container. An actual workload, however, may include many containers running simultaneously. The overall performance of the system might then depend on various factors, such as the size of the executables run in a container, the total number of containers starting at once, or the timing of a container's start relative to the state of other processes on the system. eBPF programming offers one way to measure, monitor, and investigate performance in these situations.
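One way to explore such a scenario inside the VM is to keep the tracer running and launch several containers concurrently. The snippet below is a sketch under the same assumptions as the single-container measurement (the image is already pulled, and nothing else invokes Podman during the trace); the container count of 8 is arbitrary, so adjust it to match your workload:

```
# Hypothetical stress test: trace several containers starting at once.
./fork_exec_snoop.bt > fork_exec_many.log &
sleep 10
for i in $(seq 1 8); do
    podman run --pull=never --network=host \
        quay.io/fedora/fedora:latest true &
done
wait   # let every container finish before inspecting the log
```

Comparing the per-container exec timings in fork_exec_many.log with the single-container run shows how startup overhead behaves under contention.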

Try AutoSD

This introduction to eBPF on AutoSD only scratches the surface of potential use cases. Energy-based system statistics have already been implemented for Kubernetes using eBPF and could be very valuable for battery-electric vehicles. For developers of container images or of systems running AutoSD with fast-startup requirements, it's worth examining the results of this kind of measurement in detail to improve overall system performance.


Pierre-Yves Chibon (aka pingou) is a Principal Software Engineer who has spent nearly 15 years in the Fedora community and is now looking at the challenges the automotive industry offers to the FOSS ecosystem.


Laura has written documentation about a wide variety of software and hardware over the years, but she most enjoys writing about IoT technology. She joined Red Hat in 2018. 


Paul Wallrabe, a former consultant, brings expertise in Kubernetes backend development and in eliminating toil through automation in developer toolchains. With a solid background in the automotive industry, he has turned his attention to the unique challenges that this sector presents to Linux.
