Domain-specific communication protocols often rely on lower Open Systems Interconnection (OSI) layers. Direct Ethernet frame manipulation, VLAN configuration, or Ethernet MAC address-based operations are possible, but they come with a wide range of limitations, because the underlying infrastructure details are abstracted away at higher OSI layers. Kubernetes, for example, requires IP-based communication for load balancing and autoscaling.
Controller Area Networks (CAN) are widely used in automotive and adjacent industries as a way to allow microcontrollers and other devices to communicate with each other's applications. Integrating these workloads into Kubernetes at development time, as part of continuous integration (CI) workflows, or when interacting with physical devices, is not properly documented. Even though a specific SocketCAN device plugin implementation exists, similar results can be achieved using dynamic resource allocation, a network plugin, or simply admission controllers and init containers. Each implementation comes with its own tradeoffs, and their usage must be carefully considered depending on the use case.
This document guides you through setting up a CAN bus workload on Red Hat OpenShift. Then we'll go over how to enable interpod communication. Finally, we'll enhance this setup with traffic control and shaping capabilities to mimic real-world conditions.
Enabling CAN bus workloads
SocketCAN is part of the Linux kernel. It employs the Berkeley socket API and the Linux network stack, and treats CAN device drivers as network interfaces. This design closely mirrors TCP/IP, making it straightforward for network programmers to adopt CAN sockets. The required kernel modules, can, vcan, can_dev, and can_raw, are in-tree on Red Hat Enterprise Linux CoreOS (RHCOS) nodes and just need to be loaded. This can be done by rolling out a daemon set on all worker nodes that enables the kernel modules at the node level. This requires elevated privileges, which should be managed through a dedicated service account, role, and role binding.
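Because SocketCAN reuses the Berkeley socket API, working with CAN frames looks much like ordinary socket programming. The sketch below packs and unpacks the 16-byte struct can_frame layout that SocketCAN uses for classic CAN frames on raw sockets; the helper names themselves are hypothetical:

```python
import struct

# Classic SocketCAN frame layout (struct can_frame): 32-bit CAN ID,
# 8-bit data length code, 3 padding bytes, 8 data bytes = 16 bytes.
CAN_FRAME_FMT = "=IB3x8s"

def pack_frame(can_id: int, data: bytes) -> bytes:
    """Pack a CAN ID and up to 8 data bytes into a raw CAN frame."""
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_frame(frame: bytes):
    """Recover the CAN ID and payload from a raw CAN frame."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    return can_id, data[:dlc]
```

On a node with the modules loaded, such frames can be sent and received through a raw AF_CAN socket bound to an interface like vcan0, just as a UDP socket would be bound to an address.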
Generate YAML to enable CAN-based communication, storing the data either in a single file or in individual files, depending on how modular your configuration needs to be. First, create the ServiceAccount in sa.yaml and run oc apply -f sa.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: modprobe
Also create the Role in role.yaml and run oc apply -f role.yaml:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: use-scc-privileged
rules:
- apiGroups: ["security.openshift.io"]
  resourceNames: ["privileged"]
  resources: ["securitycontextconstraints"]
  verbs: ["use"]
And finally, create the RoleBinding in rb.yaml and run oc apply -f rb.yaml:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: use-scc-privileged
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-scc-privileged
subjects:
- kind: ServiceAccount
  name: modprobe
The DaemonSet mounts the host's root directory and changes the apparent root directory of the running process to it before running modprobe. Create the DaemonSet in ds.yaml and run oc apply -f ds.yaml:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: modprobe
spec:
  selector:
    matchLabels:
      name: modprobe
  template:
    metadata:
      labels:
        name: modprobe
    spec:
      containers:
      - command:
        - sh
        - '-c'
        - sleep infinity
        image: quay.io/fedora/fedora-minimal:40
        imagePullPolicy: Always
        name: sleep
        resources:
          limits:
            cpu: 10m
            memory: 16M
          requests:
            cpu: 10m
            memory: 16M
        securityContext:
          capabilities:
            drop: ["ALL"]
      hostNetwork: true
      hostPID: true
      initContainers:
      - command:
        - sh
        - '-c'
        - chroot /host/ modprobe -a can vcan can_dev can_raw
        image: quay.io/fedora/fedora-minimal:40
        imagePullPolicy: Always
        name: modprobe
        resources:
          limits:
            cpu: 100m
            memory: 64M
          requests:
            cpu: 100m
            memory: 64M
        securityContext:
          capabilities:
            drop: ["ALL"]
          privileged: true
        volumeMounts:
        - mountPath: /host
          name: host
      priorityClassName: system-node-critical
      serviceAccountName: modprobe
      volumes:
      - hostPath:
          path: /
        name: host
  updateStrategy:
    type: RollingUpdate
This provides the necessary capabilities to instantiate can and vcan interfaces in a Pod network namespace. Instantiating these interfaces can be done in a few different ways, but a simple starting point is using init containers with the NET_ADMIN capability added.
Create a dedicated security context constraint (SCC) in scc.yaml and run oc apply -f scc.yaml:
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities:
- NET_BIND_SERVICE
- NET_ADMIN
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  type: MustRunAs
groups: []
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: net-admin is forked from restricted-v2 with added NET_ADMIN capabilities.
  name: net-admin
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- ALL
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
seccompProfiles:
- runtime/default
supplementalGroups:
  type: RunAsAny
users: []
volumes:
- configMap
- downwardAPI
- emptyDir
- ephemeral
- persistentVolumeClaim
- projected
- secret
Then assign the SCC to the default service account of the namespace where the examples will be deployed. Create the Role in role-scc.yaml and run oc apply -f role-scc.yaml:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: use-scc-net-admin
rules:
- apiGroups: ["security.openshift.io"]
  resourceNames: ["net-admin"]
  resources: ["securitycontextconstraints"]
  verbs: ["use"]
Create the RoleBinding in rb-scc.yaml and run oc apply -f rb-scc.yaml:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: use-scc-net-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-scc-net-admin
subjects:
- kind: ServiceAccount
  name: default
Deploy the workload using vcan, and then inspect the container logs for more detail, by creating a Deployment in deploy-vcan.yaml and running oc apply -f deploy-vcan.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: vcan
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vcan
  template:
    metadata:
      labels:
        app: vcan
    spec:
      initContainers:
      - name: vcan
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
        image: 'quay.io/pwallrab/edgescape/vcan:latest'
        args:
        - vcan0
      containers:
      - name: candump
        image: 'quay.io/pwallrab/edgescape/sample:latest'
        command:
        - /bin/sh
        - '-c'
        - candump vcan0
      - name: cangen
        image: 'quay.io/pwallrab/edgescape/sample:latest'
        command:
        - /bin/sh
        - '-c'
        - cangen vcan0 -v
The quay.io/pwallrab/edgescape/vcan:latest container image executes the equivalent of the following commands for each interface name passed as a space-separated list (vcan0 in this example):
sudo ip link add name vcan0 type vcan
sudo ip link set dev vcan0 up
The same can be done for physical CAN interfaces at the node level, where the link just needs to be moved into the Pod's network namespace.
Interpod communication
At this point in the process, communication between participants on a CAN bus can only happen in the same network namespace, or between containers in the same pod. This limits usability, especially when a remote CAN device needs to be accessed, or when more complex deployments with strict network isolation between participants are required. To address this shortcoming, CAN traffic must be tunneled through Ethernet. Unfortunately, there is no public in-kernel implementation for this yet. However, you can use a client-server architecture with cannelloni to solve this problem and provide transparent bridging to other cluster nodes over the network. In this case, cannelloni must be started as a sidecar container next to the workload, which a client can connect to. It even works in the same network namespace, so the previous example modified with cannelloni looks something like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cannelloni
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cannelloni
  template:
    metadata:
      labels:
        app: cannelloni
    spec:
      initContainers:
      - name: vcan
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
        image: 'quay.io/pwallrab/edgescape/vcan:latest'
        args:
        - vcan0 vcan1
      containers:
      - command:
        - /bin/sh
        - '-c'
        - cannelloni -S s -I vcan0 -p
        image: quay.io/pwallrab/edgescape/sample:latest
        name: cannelloni-server
      - command:
        - /bin/sh
        - '-c'
        - cannelloni -S c -I vcan1 -R localhost
        image: quay.io/pwallrab/edgescape/sample:latest
        name: cannelloni-client
      - command:
        - /bin/sh
        - '-c'
        - candump vcan0
        image: quay.io/pwallrab/edgescape/sample:latest
        name: candump
      - command:
        - /bin/sh
        - '-c'
        - cangen vcan1 -v
        image: quay.io/pwallrab/edgescape/sample:latest
        name: cangen
Save this YAML in deploy-cannelloni.yaml and run oc apply -f deploy-cannelloni.yaml.
This example still runs in the same Pod, but uses two different vcan interfaces (vcan0 and vcan1) that are bridged transparently by cannelloni. A multi-pod setup would need to introduce a Service for cannelloni, because IP addresses are assigned dynamically to each pod, and would need to separate client and server pods from each other through affinity rules at the cluster node level.
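To make the tunneling step more concrete, the sketch below shows the general idea of batching CAN frames into a single datagram payload with a small header. This is a simplified, hypothetical layout for illustration only, not cannelloni's actual wire format:

```python
import struct

# Simplified CAN-over-IP encapsulation (NOT cannelloni's real format):
# a header with an 8-bit sequence number and 16-bit frame count,
# followed by packed frames (32-bit CAN ID, 8-bit length, data bytes).
HEADER_FMT = "=BH"
FRAME_FMT = "=IB"

def encapsulate(seq, frames):
    """frames: list of (can_id, data) tuples -> one datagram payload."""
    payload = struct.pack(HEADER_FMT, seq, len(frames))
    for can_id, data in frames:
        payload += struct.pack(FRAME_FMT, can_id, len(data)) + data
    return payload

def decapsulate(payload):
    """Recover the sequence number and list of (can_id, data) tuples."""
    seq, count = struct.unpack_from(HEADER_FMT, payload)
    offset = struct.calcsize(HEADER_FMT)
    frames = []
    for _ in range(count):
        can_id, dlc = struct.unpack_from(FRAME_FMT, payload, offset)
        offset += struct.calcsize(FRAME_FMT)
        frames.append((can_id, payload[offset:offset + dlc]))
        offset += dlc
    return seq, frames
```

The server and client side simply run this encode/decode step between a local CAN socket and a UDP or TCP connection, which is why the bridging is transparent to candump and cangen.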
Create a Service in service.yaml and run oc apply -f service.yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: cannelloni-server
spec:
  ports:
  - name: tcp
    port: 20000
    protocol: TCP
    targetPort: 20000
  selector:
    cannelloni: server
  type: ClusterIP
Then create a Deployment for the first node in deploy-node0.yaml and run oc apply -f deploy-node0.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node0
  template:
    metadata:
      labels:
        app: node0
        cannelloni: server
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: cannelloni
                operator: In
                values:
                - client-tcp
            topologyKey: kubernetes.io/hostname
      initContainers:
      - name: vcan
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
        image: 'quay.io/pwallrab/edgescape/vcan:latest'
        args:
        - vcan0
      containers:
      - command:
        - /bin/sh
        - '-c'
        - cannelloni -C s -I vcan0 -p -l 20000 -s
        image: quay.io/pwallrab/edgescape/sample:latest
        imagePullPolicy: Always
        name: server-tcp
        ports:
        - containerPort: 20000
          name: tcp
          protocol: TCP
      - command:
        - /bin/sh
        - '-c'
        - candump vcan0
        image: quay.io/pwallrab/edgescape/sample:latest
        name: candump
And finally, create a Deployment for the second node in deploy-node1.yaml and run oc apply -f deploy-node1.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node1
  template:
    metadata:
      labels:
        app: node1
        cannelloni: client-tcp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: cannelloni
                operator: In
                values:
                - server
            topologyKey: kubernetes.io/hostname
      initContainers:
      - name: vcan
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
        image: 'quay.io/pwallrab/edgescape/vcan:latest'
        args:
        - vcan0
      containers:
      - command:
        - /bin/sh
        - '-c'
        - cannelloni -C c -I vcan0 -R "$(getent hosts cannelloni-server | awk '{print $1}')" -r 20000
        image: quay.io/pwallrab/edgescape/sample:latest
        name: client
Traffic control and shaping
Now two pods are running on different nodes, and their workloads (candump and cangen) are communicating over a CAN bus tunneled through Ethernet. Compared to a physical CAN interface, a vcan interface provides a higher data rate, so you might need to introduce variance into the communication, such as packet loss, latency, or rate limits. You can use the tc command to control and shape traffic in the Linux kernel's network stack. To apply these settings, add another init container. Because init containers run in order, each waiting for the successful completion of the previous one, the tc init container must be added after the one that creates the interfaces in the first place.
- name: tc
  securityContext:
    capabilities:
      add:
      - NET_ADMIN
  imagePullPolicy: Always
  image: 'quay.io/pwallrab/edgescape/tc:latest'
  args:
  - 'qdisc add dev vcan0 root handle 1:0 tbf rate 300kbit latency 100ms burst 1000'
  - 'qdisc add dev vcan0 parent 1:1 handle 10: netem loss 90%'
This introduces 100 ms of latency, a 300 kbit/s rate limit, and 90% packet loss on the vcan0 interface.
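As a rough sanity check of those tbf parameters, the maximum backlog the qdisc will queue before dropping follows from the tc-tbf relation limit ≈ rate × latency + burst. This is a back-of-the-envelope approximation; the kernel computes the actual limit internally:

```python
# Approximate maximum tbf backlog for the qdisc configured above,
# using limit ≈ rate * latency + burst (all converted to bytes).
rate_bps = 300_000      # 300 kbit/s
latency_s = 0.100       # 100 ms
burst_bytes = 1000

limit_bytes = rate_bps / 8 * latency_s + burst_bytes
print(limit_bytes)  # 4750.0
```

In other words, roughly 4.7 KB of traffic can be queued before the latency bound causes drops; with 16-byte classic CAN frames, that is on the order of 290 frames.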
Limitations
You have now established working interpod CAN communication across multiple nodes. This enables the emulation of more complex network topologies without being constrained by the resources available on a single node. This setup's biggest limitation lies in cannelloni's implementation, which only allows a single client connection per server. If multiple clients need to be attached to the same CAN bus, multiple servers must be started, and the Service resource must be modified accordingly. This means dynamic network topologies at runtime are not possible.
Additionally, passing a physical CAN interface through to a pod's network namespace means only a single pod can consume the resource directly. Local bridging can address this.
Another issue is that can and vcan interfaces are not properly managed and operationalized, but instead provisioned with heightened privileges (the NET_ADMIN capability) as part of an init container. Ideally, users should not be able to use these privileges directly, but this depends on the actual use case. Gating access to these capabilities can be done as part of an admission webhook or through alternative implementations.
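As one possible sketch of such gating, a Kubernetes ValidatingAdmissionPolicy could reject pods whose init containers request NET_ADMIN. This is an illustrative, untested fragment (assuming a cluster with admissionregistration.k8s.io/v1 ValidatingAdmissionPolicy support); a complete setup also needs a matching ValidatingAdmissionPolicyBinding scoped to the namespaces where enforcement is wanted:

```yaml
# Hypothetical policy: reject any pod whose init containers add the
# NET_ADMIN capability. Bind it only where this should be enforced.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: deny-net-admin
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods"]
  validations:
  - expression: >-
      !has(object.spec.initContainers) ||
      object.spec.initContainers.all(c,
        !has(c.securityContext) ||
        !has(c.securityContext.capabilities) ||
        !has(c.securityContext.capabilities.add) ||
        !('NET_ADMIN' in c.securityContext.capabilities.add))
    message: "Init containers must not add the NET_ADMIN capability."
```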
Conclusion
While the CAN bus can be considered a specifically automotive communication protocol, it is also used in adjacent industries. This approach does not guarantee that CAN frames reach their destination at all, or in the right order; production use is therefore not solved by this approach and needs further investigation. Ultimately, this example shows how domain-specific protocols such as the CAN bus, which rely on non-Ethernet-based communication, can be integrated into OpenShift, extending the cloud to the device edge and enabling architectures where processing is done centrally.
About the author
Paul Wallrabe, a former consultant, has expertise in Kubernetes backend development and in eliminating toil through automation in developer toolchains. With a solid background in the automotive industry, he has turned his attention to the unique challenges that this sector presents to Linux.