
After hitting 1.0 in October of last year and being shipped as generally available (GA) in OpenShift 3.9, CRI-O has reached another important milestone—it’s now being used in production for many workloads running on OpenShift Online Starter accounts using OpenShift 3.10. Using CRI-O in a real-world production environment with diverse Kubernetes workloads is an important part of the development feedback loop for improving and extending CRI-O and OpenShift.

What is CRI-O?

To recap a brief history of CRI-O, the project was initially introduced as the “Open Container Initiative Daemon” (OCID) in September 2016. It was renamed shortly after to acknowledge its relationship with Kubernetes’ Container Runtime Interface (CRI), as well as its relationship with the Open Container Initiative Standards.

At its core, CRI-O is a lightweight container engine for Kubernetes that looks to cut out any extraneous tooling and simply serve as a way to run Docker/OCI-compliant Linux containers with OCI-compliant container runtimes. Most Kubernetes users, including OpenShift users, don’t care about the container engine itself – so long as it works, they don’t really want to think about it.

And that’s one of the CRI-O project’s goals: to be “boring.” CRI-O is optimized for Kubernetes. The project is committed to ensuring that CRI-O passes Kubernetes tests, and to having CRI-O work with any compliant container registry and run any OCI-compliant container.
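
Because CRI-O speaks the standard CRI, node-level debugging does not have to go through the docker CLI; the generic CRI client, crictl, can talk to it directly. A minimal sketch, assuming crictl is installed on the node and CRI-O is listening on its usual socket (the socket path can vary by version and configuration):

    # Point crictl at the CRI-O socket (path assumed; adjust for your install).
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    # List the pod sandboxes and containers the kubelet has asked CRI-O to run.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps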

CRI-O in production

OpenShift Online’s free Starter account allows developers to get hands-on experience with OpenShift quickly, without needing to stand up an instance on their own. Behind the scenes, it is actually a set of OpenShift clusters that provide the service. The OpenShift Online operations team has now transitioned the compute nodes for entire clusters to run CRI-O with no disruption to end-users.

In true cloud fashion, the OpenShift Online operations team was able to release a canary deployment of CRI-O in production, side by side with Docker and transparent to end users, and then expand the deployment to cover entire clusters. Note that the OpenShift Online operations team uses the same methods as our customers to deploy and manage OpenShift Container Platform in their own environments. This provides an additional layer of testing and observation to make OpenShift production-ready at scale.
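
Because the kubelet reports each node’s container runtime in the node status, it is easy to see which engine a given node is running; a quick check might look like the following (node names and version strings here are purely illustrative):

    oc get nodes -o custom-columns=NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion
    # NAME                 RUNTIME
    # node-1.example.com   docker://1.13.1
    # node-2.example.com   cri-o://1.10.2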

In fact, in the short time that CRI-O has been in production, the potential reduction in support burden looks promising. This is a testament to the stability and security features of CRI-O. That said, Red Hat continues to support the Docker engine shipped with Red Hat Enterprise Linux 7 for OpenShift, so Red Hat Enterprise Linux and OpenShift Container Platform users can rest assured that their existing usage of Docker is well-supported.

By kicking off our production use of CRI-O with OpenShift Online, the operations team is collecting important data on how CRI-O handles in a real-world use case. This data will be fed back into the upstream CRI-O project as bug and security fixes, as well as new features that are useful for the entire community.

“It’s really exciting to see CRI-O being used more and more widely,” says Derek Carr, an OpenShift architect at Red Hat. “We test CRI-O extensively before each stable release, and our user and contributor base has grown. But there’s nothing like putting work into multiple large-scale production environments to get feedback and ensure it’s ready for real-world use.”

Using CRI-O today

CRI-O was declared GA in OpenShift 3.9, which means that customers can start using it today in their own environments. Scott McCarty, Principal Product Manager - Containers, Red Hat, has an excellent post that explains the steps to enable CRI-O in OpenShift Container Platform. Docker remains the default, and customers can fine-tune whether CRI-O runs on all nodes or just some.
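
For example, with an openshift-ansible based installation, enabling CRI-O comes down to a few inventory variables. The fragment below is a minimal sketch based on the variables described in that post; exact names and placement may differ between releases (3.10 in particular introduced node group configuration), so check the documentation for your version:

    # Ansible inventory fragment; variable names assumed from the OCP 3.9 docs.
    [OSEv3:vars]
    # Install and configure CRI-O alongside the default Docker engine.
    openshift_use_crio=True
    # Optionally make CRI-O the only container engine on the affected nodes.
    # openshift_use_crio_only=True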

As always, CRI-O plans to continue releasing alongside Kubernetes, and providing updates upstream for the past three major versions of Kubernetes. The CRI-O project welcomes new contributors, and we’d like to thank our current contributors for their assistance in reaching this milestone. If you would like to contribute, or follow development, head to the CRI-O project’s GitHub repository and follow the CRI-O blog.


About the author

Joe Brockmeier is the editorial director of the Red Hat Blog. He also acts as Vice President of Marketing & Publicity for the Apache Software Foundation.

Brockmeier joined Red Hat in 2013 as part of the Open Source and Standards (OSAS) group, now the Open Source Program Office (OSPO). Prior to Red Hat, Brockmeier worked for Citrix on the Apache CloudStack project, and was the first openSUSE community manager for Novell from 2008 to 2010.

He also has an extensive history in the tech press and publishing, having been editor-in-chief of Linux Magazine, editorial director of Linux.com, and a contributor to LWN.net, ZDNet, UnixReview.com, and many others. 

