A few weeks ago, we shared our thoughts about running containers in cars. Today, let's discuss our vision for the experience of developing containerized in-vehicle applications.

In our previous blog post, we discussed using Podman and systemd to run containers in cars. You might be asking yourself:

  • Does this mean that developers won’t be able to leverage their experience working with Kubernetes or Red Hat OpenShift?
  • Will this prevent the automotive industry from attracting new talent because developers need to learn yet another new process for container management and deployment?
  • Do we need to create new CI/CD (continuous integration/continuous delivery) pipelines rather than relying on existing infrastructure and tools, and won’t this slow down the development process?

In a word: no. Let us explain further.

If you work with containers, you are most likely familiar with Kubernetes objects, often expressed in YAML format (Kubernetes YAML). Using Kubernetes YAML to describe how to deploy containerized applications has become a de facto standard. We want to leverage this standard to enable a model in which local application development, virtual testing, hardware-in-the-loop testing, and final deployment all share common ground: Kubernetes YAML describing how an application should be deployed.

Podman can run containers using Kubernetes YAML files as input. Our recent blog post, How to "build once, run anywhere" at the edge with containers, describes how to deploy containers with either Podman or Kubernetes (and thus OpenShift) using the same Kubernetes YAML. Following the demo in that post, you will end up with a set of Kubernetes YAML files describing how to deploy an application. You can use those files to deploy the application on a Kubernetes or OpenShift cluster with kubectl apply (or oc apply), and then use the same files to deploy and run the application locally with podman kube play.
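As a sketch of what such a shared deployment definition might look like, here is a minimal, hypothetical Kubernetes YAML file; the application name, image, and port are placeholders, not taken from the demo:

```yaml
# app.yaml -- hypothetical example; the name, image, and port are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: ivi-demo
  labels:
    app: ivi-demo
spec:
  containers:
    - name: ivi-demo
      image: quay.io/example/ivi-demo:latest
      ports:
        - containerPort: 8080
```

The same file could then be deployed to a cluster with `kubectl apply -f app.yaml` (or `oc apply -f app.yaml`) and run locally with `podman kube play app.yaml`.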

Using this mechanism, we can foresee an architecture in which developers build and test their applications locally using Podman. Once satisfied, they can then push their source files, Containerfile, and Kubernetes YAML to one or more repositories, all of which would then be used in a continuous integration pipeline to compile the code into a container image. That container image can then be deployed according to the instructions in the Kubernetes YAML and tested as desired. If the tests pass, the application can then move on to the next testing phase, be that another virtual testing phase or onto actual hardware. All of these files (sources, Containerfile and Kubernetes YAML) are sufficient to enable running (and thus testing) the application in all contexts.
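To make the set of artifacts concrete, a minimal Containerfile for such an application might look like the following sketch; the base images, paths, and binary name are assumptions for illustration, not a prescribed layout:

```dockerfile
# Containerfile -- hypothetical sketch; image names, paths, and the binary name are placeholders
FROM registry.access.redhat.com/ubi9/ubi-minimal AS build
COPY . /src
# ... build steps producing /src/ivi-demo ...

FROM registry.access.redhat.com/ubi9/ubi-micro
COPY --from=build /src/ivi-demo /usr/local/bin/ivi-demo
ENTRYPOINT ["/usr/local/bin/ivi-demo"]
```

A CI pipeline would build this into an image (for example with `podman build -t quay.io/example/ivi-demo:latest .`), push it to a registry, and then deploy it according to the accompanying Kubernetes YAML at each testing phase.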

In summary, we wanted to point you to our previous blog post and highlight what it means for the automotive industry: Kubernetes knowledge can be transferred from the IT industry to the automotive sector. This reduces the amount of net-new knowledge engineers must learn when joining the automotive industry and accelerates the development of containerized applications by leveraging the existing container ecosystem.

About the authors

Pierre-Yves Chibon (aka pingou) is a Principal Software Engineer who spent nearly 15 years in the Fedora community and is now looking at the challenges the automotive industry offers to the FOSS ecosystems.

Ygal Blum is a Principal Software Engineer who is also an experienced manager and tech lead. He writes code from C and Java to Python and Golang, targeting platforms from microcontrollers to multicore servers, and serving verticals from test equipment through mobile and automotive to cloud infrastructure.

Alexander has worked at Red Hat since 2001, doing development work on the desktop and containers, including the creation of Flatpak and a great deal of foundational work on GNOME.

Daniel Walsh has worked in the computer security field for over 40 years. Dan is a Senior Distinguished Engineer at Red Hat, which he joined in August 2001, and is the lead architect of the Red Hat Container Runtime Engineering team. He has been working on container technologies for 17 years, focusing on the CRI-O container runtime for Kubernetes, Buildah for building container images, Podman for running and managing containers, and the containers/storage and containers/image libraries. He led the SELinux project, concentrating on the application space and policy development, helped develop sVirt (Secure Virtualization), and created the SELinux Sandbox. Previously, Dan worked on vulnerability assessment products at Netect/BindView and, at Digital Equipment Corporation, on the Athena Project and the AltaVista Firewall/Tunnel (VPN) products. Dan has a BA in Mathematics from the College of the Holy Cross and an MS in Computer Science from Worcester Polytechnic Institute.