The kubelet is the primary "node agent" that runs on each node and provides a lot of configuration options, but sometimes they are still not enough, and you want to run additional configuration on the node before a container starts or after it stops.
For example, when you want isolated CPUs but do not want to exclude them from load balancing permanently, you can disable CPU load balancing on demand for pinned CPUs via sched domains. Once the container starts, our code fetches the CPUs pinned to the container and disables CPU load balancing for those specific CPUs via the sched domain; once the container is removed, our code restores CPU load balancing.
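As a rough illustration of that idea, here is a minimal sketch (not the actual hook code) that disables load balancing for a set of pinned CPUs through the cpuset cgroup's sched_load_balance flag, assuming cgroup v1 with the cpuset controller mounted under /sys/fs/cgroup/cpuset. The cgroup path and the CPU range are hypothetical; in a real hook they would be derived from the container state, and load balancing is only really disabled when no parent cpuset keeps those CPUs in a balanced sched domain.
#!/bin/bash
# Sketch only: hypothetical cpuset path and CPU range; in a real hook both
# would come from the container state.
CONTAINER_CPUSET="/sys/fs/cgroup/cpuset/isolated-container"
PINNED_CPUS="2-3"

mkdir -p "${CONTAINER_CPUSET}"
echo "${PINNED_CPUS}" > "${CONTAINER_CPUSET}/cpuset.cpus"
echo 0 > "${CONTAINER_CPUSET}/cpuset.mems"

# 0 = do not keep these CPUs in a common sched domain (no load balancing)
echo 0 > "${CONTAINER_CPUSET}/cpuset.sched_load_balance"

# on container removal, restore load balancing:
# echo 1 > "${CONTAINER_CPUSET}/cpuset.sched_load_balance"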
For such use cases, the CRI can help: it provides different mechanisms to inject the desired configuration via a custom script once a container starts or stops.
It is important to note that we currently have two main CRI implementations: containerd, the default CRI implementation used by vanilla Kubernetes, and CRI-O, used by default under OpenShift. Depending on the specific implementation, the availability and the configuration of these mechanisms can differ.
OCI hooks
The OCI hooks mechanism defines several entry points to inject your code. To be more specific, runtime-spec version 1.0.0 supports the prestart, poststart, and poststop entry points.
- CRI-O supports OCI hooks with the runtime-spec version 1.0.0
- containerd does not support OCI hooks; you can track issue https://github.com/containerd/cri/issues/405
Pros
- provides several entry points
- provides a filtering mechanism that prevents the hook code from running on every container
- easy to configure
Cons
- not supported by containerd
- impossible to prevent the use of hooks via RBAC
Configuration
Because currently only CRI-O supports hooks, I will concentrate on the CRI-O configuration.
To enable hooks under CRI-O, you should specify the hooks directory in /etc/crio/crio.conf and add the hook JSON specification under that directory.
NOTE: The default search paths for hooks are /etc/containers/oci/hooks.d/ and /usr/share/containers/oci/hooks.d/.
Example:
# cat /etc/crio/crio.conf
...
hooks_dir = ["/etc/containers/oci/hooks.d"]
...
# cat /etc/containers/oci/hooks.d/test-hook.json
{
  "version": "1.0.0",
  "hook": {
    "path": "/usr/libexec/oci/hooks.d/oci-systemd-hook" // path to the hook binary
  },
  "when": {
    "commands": [".*/init$", ".*/systemd$"] // run this hook only when the container command ends with init or systemd
  },
  "stages": ["prestart", "poststop"] // run this hook before the container starts and after the container stops
}
The hook script receives the container state on stdin in JSON format, which allows you to parse it and fetch all the relevant information about the container.
You can check this link for a better explanation of the hook schema: https://www.mankier.com/5/oci-hooks.
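To illustrate, here is a minimal prestart hook sketch that parses the container state from stdin with jq (assuming jq is available on the node). The log file path is just an example; the id, pid, and bundle fields are part of the runtime-spec state that the runtime passes to hooks.
#!/bin/bash
# Minimal prestart hook sketch: the runtime passes the container state as JSON on stdin.
STATE="$(cat)"

CONTAINER_ID="$(echo "${STATE}" | jq -r '.id')"
CONTAINER_PID="$(echo "${STATE}" | jq -r '.pid')"
BUNDLE_PATH="$(echo "${STATE}" | jq -r '.bundle')"

# Example action: log what we received (example log path).
echo "$(date) prestart hook for ${CONTAINER_ID} (pid ${CONTAINER_PID}, bundle ${BUNDLE_PATH})" >> /var/log/oci-hook-example.log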
Runtime wrapper
The second option is to create a wrapper around the runc runtime handler that is used by containerd and CRI-O.
Pros
- provides a more flexible approach to run code at every phase of the container lifecycle
Cons
- requires providing a wrapper around runc
- more complex configuration
- impossible to prevent the use of a RuntimeClass via RBAC
Configuration
On the node, you should specify an additional runtime handler in the CRI implementation configuration.
CRI-O
# cat /etc/crio/crio.conf
...
[crio.runtime.runtimes.wrapper]
runtime_path = "/usr/bin/wrapper"
...
Containerd
# cat /etc/containerd/config.toml
...
[plugins.cri.containerd.runtimes.wrapper]
runtime_type = "io.containerd.runc.v1"
pod_annotations = ["*"] # pass all pod annotations to the runtime
container_annotations = ["*"] # pass all container annotations to the runtime
[plugins.cri.containerd.runtimes.wrapper.options]
BinaryName="/usr/bin/wrapper"
...
The wrapper runs each time the CRI implementation calls the runtime. It is important to note that it can be any executable file, but it must call the runc binary with all the relevant parameters before exiting.
Example:
#!/bin/bash
# Log the arguments to a file (example path) instead of stdout, so runc's own output is not corrupted.
echo "$@" >> /var/log/runc-wrapper.log
exec /usr/bin/runc "$@"
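If you want to react only to specific lifecycle phases, the wrapper can inspect the runc subcommand before delegating. The following is a rough sketch under the assumption that the subcommand (for example create or delete) appears among the arguments; the on_create and on_delete functions are hypothetical placeholders for your own logic, and the log path is the same example path as above.
#!/bin/bash
# Sketch: run custom logic only for selected runc subcommands, then delegate to runc.

on_create() { echo "$(date) container create: $*" >> /var/log/runc-wrapper.log; }
on_delete() { echo "$(date) container delete: $*" >> /var/log/runc-wrapper.log; }

for arg in "$@"; do
  case "${arg}" in
    create) on_create "$@" ;;
    delete) on_delete "$@" ;;
  esac
done

exec /usr/bin/runc "$@"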
In the cluster, you should create a new RuntimeClass resource that uses the new runtime handler.
Example:
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: wrapper # The name the RuntimeClass will be referenced by
handler: wrapper # The name of the corresponding CRI configuration
scheduling:
  nodeSelector: {...} # The node selector defines on which nodes pods that use this RuntimeClass will run
Finally, you should specify the runtime class in the pod specification.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  runtimeClassName: wrapper
  ...
Now when you start the pod, the CRI implementation will use our wrapper as the runtime handler and execute our script.
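As a quick check, assuming the wrapper logs to the example path used in the sketches above and the pod manifest is saved as mypod.yaml, you can start the pod and inspect the log on the node:
kubectl apply -f mypod.yaml
# on the node that runs the pod
tail /var/log/runc-wrapper.log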
Conclusion
As you can see, we have some good options that allow us to run additional operating system configuration before a container starts, or at any other phase of the container lifecycle. But you should be aware that it is currently impossible to limit the usage of hooks or runtime classes via standard Kubernetes mechanisms, so do not overuse them.