CRI-O: an OCI-based implementation of the Kubernetes Container Runtime Interface

Today CRI-O, a project started at Red Hat in 2016 to be an Open Container Initiative (OCI)-based implementation of the Kubernetes Container Runtime Interface, is being contributed to the Cloud Native Computing Foundation (CNCF). The project joins cornerstone container and Kubernetes projects we have been a part of, such as etcd, in a neutral home for stewardship.

This is a step forward for the container and CRI-O communities because it brings the project into the same home as Kubernetes, which benefits users given the close interdependency between the two projects: CRI-O and Kubernetes follow the same release cycle and deprecation policy.

CRI-O already has a variety of maintainers outside of Red Hat, including maintainers from Intel and SUSE. Red Hat plans to continue participating in CRI-O development, especially as a part of our enterprise Kubernetes product, Red Hat OpenShift. With our heritage of and dedication to open source software and community-driven development, we believe CRI-O can benefit the community even further within the CNCF, housed next to Kubernetes.

What is CRI-O? A runtime for Kubernetes

CRI-O is an implementation of the Kubernetes Container Runtime Interface (CRI) using Open Container Initiative (OCI) images and runtimes. CRI-O versions match Kubernetes versions, so it is easy for users to pick the right version of CRI-O for their Kubernetes cluster. For example, CRI-O 1.13.x works with Kubernetes 1.13.x, CRI-O 1.12.x works with Kubernetes 1.12.x, and so on.

Here is a brief overview of the CRI-O architecture:

[Figure: CRI-O architecture diagram]

The kubelet talks to CRI-O using the CRI gRPC API. The CRI has an image service and a runtime service. The image service is responsible for pulling images onto the node as needed to run containers in a pod; CRI-O uses the containers/image library to pull images to the node. The runtime service is responsible for running the containers. It uses the containers/storage library to create copy-on-write root filesystems for the containers.
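
To make the split between the two services concrete, here is a minimal Go sketch that talks to CRI-O the way the kubelet does, over the CRI gRPC API, using the k8s.io/cri-api client stubs. The socket path and image name are assumptions (/var/run/crio/crio.sock is CRI-O's usual default, but your installation may differ).

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default CRI socket (an assumption; adjust for your install).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI-O socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// The CRI exposes two gRPC services over the same socket.
	runtimeClient := runtimeapi.NewRuntimeServiceClient(conn)
	imageClient := runtimeapi.NewImageServiceClient(conn)

	// Runtime service: ask the runtime to identify itself.
	version, err := runtimeClient.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version: %v", err)
	}
	fmt.Printf("runtime: %s %s (CRI %s)\n",
		version.RuntimeName, version.RuntimeVersion, version.RuntimeApiVersion)

	// Image service: pull an image onto the node, as the kubelet would
	// before starting a pod's containers.
	pulled, err := imageClient.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "docker.io/library/busybox:latest"},
	})
	if err != nil {
		log.Fatalf("PullImage: %v", err)
	}
	fmt.Println("pulled image:", pulled.ImageRef)
}
```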

In the upcoming OpenShift 4, CRI-O is configured with overlayfs as the copy-on-write storage driver. CRI-O uses the OCI runtime-tools library to generate an OCI runtime configuration that runc can understand. Finally, it launches the containers using runc or any OCI-compatible runtime, such as Kata Containers, along with a monitoring process called conmon. conmon is a small per-container monitoring process. It is responsible for monitoring a container to record its exit code, writing logs, handling the container's tty, serving attach clients, reaping processes, and reporting Out-of-Memory (OOM) conditions. CRI-O uses the Container Network Interface (CNI) to set up networking for the containers, so CNI plugins such as Flannel, Cilium, Weave, or OpenShift SDN are supported.
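
The runtime service side of that same API is what ultimately exercises runc, conmon, and CNI. The following rough sketch (reusing the gRPC client set up in the previous snippet; the pod name, log paths, and image are illustrative assumptions only) walks through the CRI calls the kubelet issues for a single pod, with comments noting what CRI-O does underneath at each step.

```go
// runDemoPod is a hypothetical helper showing the CRI calls behind a single
// pod; it expects a RuntimeServiceClient like the one dialed in the previous
// snippet. Names, paths, and the image are illustrative assumptions only.
func runDemoPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	sandboxConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "demo-pod",
			Namespace: "default",
			Uid:       "demo-pod-uid",
		},
		LogDirectory: "/var/log/pods/demo-pod",
	}

	// RunPodSandbox creates the pod's infrastructure; CRI-O sets up the
	// sandbox and invokes the configured CNI plugins to wire up networking.
	sandbox, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxConfig})
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}

	// CreateContainer prepares a container inside the sandbox; CRI-O builds a
	// copy-on-write root filesystem (e.g. overlayfs via containers/storage)
	// and generates the OCI runtime config that runc will consume.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandbox.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "demo"},
			Image:    &runtimeapi.ImageSpec{Image: "docker.io/library/busybox:latest"},
			Command:  []string{"sleep", "3600"},
			LogPath:  "demo.log",
		},
		SandboxConfig: sandboxConfig,
	})
	if err != nil {
		return fmt.Errorf("CreateContainer: %w", err)
	}

	// StartContainer launches the container via runc (or another OCI-compatible
	// runtime), monitored by a conmon process that captures logs and the exit code.
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
	if err != nil {
		return fmt.Errorf("StartContainer: %w", err)
	}
	return nil
}
```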

CRI-O for Kubernetes users

CRI-O provides a lightweight runtime for Kubernetes. It is focused on managing and running containers in Kubernetes. This supports the security principle of separation of concerns.

CRI-O aims to be Kubernetes-first, and CRI-O releases follow Kubernetes in lock-step. Each pull request to CRI-O has to pass the Kubernetes end-to-end (e2e) tests before it is merged. CRI-O ships packages for various Linux distributions, and tools such as minikube and kubeadm can set up Kubernetes clusters with CRI-O as the runtime. CRI-O releases track all supported Kubernetes versions, matching the most recent three minor releases.

The same unmodified CRI-O is supported in Red Hat OpenShift as well. It has been offered as an option to customers since OpenShift 3.9 and is planned as an option in the upcoming OpenShift 4 releases.

Differences between CRI-O and other runtimes

CRI-O limits its scope to that of Kubernetes, focusing only on the features that Kubernetes needs and nothing more. This narrow focus drives stability, performance, and security down the stack, allowing the cloud native ecosystem to focus reliably on the Kubernetes layer and above.

There are other projects, such as containerd, the Docker daemon, Pouch Container, or Singularity, that can provide a CRI socket, but they also accommodate additional use cases. For example, the Docker daemon differs from CRI-O in that it is one large tool used for many purposes and by many roles: it is used to build, manage, run, and inspect containers. A developer needs to do all of these tasks while developing a containerized application on their laptop, but the security principle of least privilege is better served in production environments by using different tools for different purposes.

CRI-O does include some troubleshooting capabilities, but it intentionally does not include container build APIs; it exposes only the Kubernetes CRI API. For interacting with CRI-O (and similar CRI runtimes) there are tools such as the Kubernetes SIGs' `crictl`.

Get involved

We celebrate the contribution of CRI-O to the CNCF as work on the project continues with the community. CRI-O development happens at github.com/cri-o/cri-o. The maintainers are active in #crio on kubernetes.slack.com as well as #cri-o on IRC. All users and contributors are welcome to get involved in CRI-O development. Maintainers are happy to help debug issues, guide new contributors, and assist with integrating CRI-O into other projects.