
AI/ML on Red Hat OpenShift


AI/ML on Red Hat® OpenShift® accelerates AI/ML workflows and the delivery of AI-powered intelligent applications, whether on self-managed Red Hat OpenShift or our AI/ML cloud service.

Red Hat OpenShift includes key capabilities to enable machine learning operations (MLOps) in a consistent way across datacenters, public cloud computing, and edge computing.

By applying DevOps and GitOps principles, organizations automate and simplify the iterative process of integrating ML models into software development processes, production rollout, monitoring, retraining, and redeployment for continued prediction accuracy. 

Building intelligent applications is a multi-phase process that draws on large volumes and varieties of data, abundant compute, and open source machine learning tools.

At a high level, there are four steps in the lifecycle:

  1. Gather and prepare data to make sure the input data is complete and of high quality
  2. Develop the model, including training, testing, and selecting the model with the highest prediction accuracy
  3. Integrate the model into the application development process, and serve it for inferencing
  4. Monitor and manage the model to measure business performance and address potential production data drift
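The four lifecycle steps above can be sketched in miniature. The toy thresholding "model" below is purely illustrative, standing in for real training with tools such as TensorFlow or PyTorch:

```python
# Minimal pure-Python sketch of the four lifecycle steps (illustrative only).

def gather_and_prepare(raw):
    """Step 1: keep only complete records (drop rows with missing values)."""
    return [(x, y) for x, y in raw if x is not None and y is not None]

def develop_model(data):
    """Step 2: 'train' candidate thresholds and keep the most accurate one."""
    def accuracy(t):
        return sum((x > t) == y for x, y in data) / len(data)
    candidates = [0.3, 0.5, 0.7]
    return max(candidates, key=accuracy)

def predict(model, x):
    """Step 3: inference as it would run inside an application."""
    return x > model

def monitor(model, live_data):
    """Step 4: measure live accuracy to spot potential data drift."""
    return sum(predict(model, x) == y for x, y in live_data) / len(live_data)

raw = [(0.9, True), (0.2, False), (None, True), (0.8, True), (0.1, False)]
data = gather_and_prepare(raw)
model = develop_model(data)
```

In a real workflow each function maps to a distinct, often team-spanning phase, which is why the sections below treat them separately.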

Data scientists are primarily responsible for ML modeling to ensure the selected model continues to provide the highest prediction accuracy. 

The key challenges data scientists face are:

  • Selecting and deploying the right ML tools (e.g., Apache Spark, Jupyter notebooks, TensorFlow, PyTorch)
  • The complexity and time required to train, test, select, and retrain the ML model that provides the highest prediction accuracy
  • Slow execution of modeling and inferencing tasks because of a lack of hardware acceleration
  • Repeated dependency on IT operations to provision and manage infrastructure
  • Collaborating with data engineers and software developers to ensure input data hygiene and successful ML model deployment in application development processes

Containers and Kubernetes are key to accelerating the ML lifecycle as these technologies provide data scientists the much needed agility, flexibility, portability, and scalability to train, test, and deploy ML models.

Red Hat® OpenShift® is the industry's leading container and Kubernetes hybrid cloud platform. It provides all of these benefits, and through integrated DevOps capabilities (e.g., OpenShift Pipelines, OpenShift GitOps, and Red Hat Quay) and integration with hardware accelerators, it enables better collaboration between data scientists and software developers and accelerates the rollout of intelligent applications across the hybrid cloud (data center, edge, and public clouds).

Red Hat OpenShift Data Science

Red Hat OpenShift AI is an AI-focused portfolio that provides tools across the full lifecycle of AI/ML experiments and models and includes Red Hat OpenShift Data Science.

Red Hat OpenShift Data Science is available as a self-managed offering or a managed cloud service for data scientists and developers of intelligent applications. It provides a fully supported sandbox in which to rapidly develop, train, and test machine learning (ML) models in the public cloud before deploying them in production.

    Empower data scientists

    • A self-service, consistent cloud experience for data scientists across the hybrid cloud.
    • Flexibility and portability to use the containerized ML tools of their choice to quickly build, scale, reproduce, and share ML models.
    • Use the most relevant ML tools via Red Hat certified Kubernetes Operators, for both self-managed deployments and our AI cloud service option.
    • Eliminate dependency on IT to provision infrastructure for iterative, compute-intensive ML modeling tasks.
    • Eliminate "lock-in" concerns with any particular cloud provider and its menu of ML tools.
    • Tight integration with CI/CD tools allows ML models to be deployed quickly and iteratively, as needed.

    Accelerate compute-intensive ML modeling jobs

    Integration with popular hardware accelerators such as NVIDIA GPUs, via the Red Hat certified GPU operator, means that OpenShift can seamlessly meet the high compute requirements of selecting the ML model with the highest prediction accuracy, and of ML inferencing jobs as the model encounters new data in production.
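On OpenShift, a workload claims a GPU by requesting the extended resource that the GPU operator's device plugin exposes (`nvidia.com/gpu`). A minimal sketch of such a pod spec, built as a Python dict with illustrative names and image:

```python
def gpu_pod_spec(image: str, gpus: int = 1) -> dict:
    """Minimal pod spec requesting NVIDIA GPUs via the extended resource
    name exposed by the GPU operator's device plugin. The pod name and
    image are illustrative placeholders."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "ml-training"},
        "spec": {
            "containers": [{
                "name": "trainer",
                "image": image,
                # The scheduler places this pod only on nodes advertising
                # enough free nvidia.com/gpu capacity.
                "resources": {"limits": {"nvidia.com/gpu": gpus}},
            }],
            "restartPolicy": "Never",
        },
    }

spec = gpu_pod_spec("quay.io/example/trainer:latest", gpus=2)
```

The same resource request works whether the pod is a training job or an inference service, which is what lets one cluster serve both phases of the lifecycle.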

    Develop intelligent apps

    OpenShift’s built-in DevOps capabilities enable MLOps to speed up the delivery of AI-powered applications and simplify the iterative process of integrating ML models and redeploying them to maintain prediction accuracy.

    Extending OpenShift DevOps automation capabilities to the ML lifecycle enables collaboration between data scientists, software developers, and IT operations so that ML models can be quickly integrated into the development of intelligent applications. This helps boost productivity and simplifies lifecycle management for ML-powered intelligent applications.

    • Build model container images with OpenShift Builds.
    • Continuous, iterative development of ML model-powered intelligent applications with OpenShift Pipelines.
    • Continuous deployment automation for ML model-powered intelligent applications with OpenShift GitOps.
    • An image registry to version model container images and microservices with Red Hat Quay.
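The retrain-and-redeploy loop these capabilities automate can be sketched as follows. The functions are pure-Python stand-ins around a toy threshold model; in practice the retrain would be an OpenShift Pipelines run and the rollout a GitOps sync:

```python
# Illustrative MLOps loop: redeploy a retrained model only when monitored
# accuracy has drifted below a threshold. All names here are stand-ins.

def evaluate(model, data):
    """Live accuracy of a threshold model on labeled production samples."""
    return sum((x > model) == y for x, y in data) / len(data)

def retrain(data):
    """Refit the threshold on recent data (stand-in for a training job)."""
    return max([0.3, 0.5, 0.7],
               key=lambda t: sum((x > t) == y for x, y in data) / len(data))

def mlops_loop(model, live_data, min_accuracy=0.8):
    """Trigger retraining and redeployment only on accuracy drift."""
    if evaluate(model, live_data) < min_accuracy:
        model = retrain(live_data)  # would be an OpenShift Pipelines run
        # deploy(model)             # would be an OpenShift GitOps sync
    return model
```

The key design point the sketch shows is that monitoring output, not a fixed schedule, gates the expensive retrain-and-rollout path.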

    OpenShift is helping organizations across various industries accelerate business- and mission-critical initiatives by developing intelligent applications in the hybrid cloud. Example use cases include fraud detection, data-driven health diagnostics, connected cars, oil and gas exploration, automated insurance quotes, and claims processing.


    Red Hat Data Services was built to address petabyte-scale storage requirements across the ML lifecycle, from data ingestion and preparation through ML modeling to the inferencing phase. Included in the Red Hat Data Services portfolio is Red Hat Ceph Storage, an open source software-defined storage system that provides comprehensive support for S3 object, block, and file storage and delivers massive scalability on industry-standard commodity hardware.

    For example, you can present scalable Ceph storage to containerized Jupyter notebooks on OpenShift via S3 or persistent volumes.
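As a sketch, a notebook could pull a prepared dataset from Ceph's S3-compatible gateway like this. The endpoint, bucket, and object names are illustrative assumptions, and boto3 plus credentials are assumed to be available in the notebook image:

```python
import csv
import io

def load_csv_rows(payload: bytes) -> list:
    """Parse a CSV object body, as returned by an S3 GetObject call."""
    return list(csv.DictReader(io.StringIO(payload.decode("utf-8"))))

def fetch_training_data(bucket: str, key: str, endpoint_url: str) -> list:
    """Fetch a CSV dataset from a Ceph RGW S3 endpoint (sketch only;
    requires boto3 and configured credentials, which are assumptions)."""
    import boto3  # assumed present in the notebook image
    s3 = boto3.client("s3", endpoint_url=endpoint_url)
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return load_csv_rows(body)

# Hypothetical usage inside a notebook:
# rows = fetch_training_data("ml-data", "train.csv",
#                            "https://rgw.example.internal")
```

Alternatively, the same Ceph storage can back a persistent volume claim so the notebook sees the data as an ordinary filesystem path.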

    Turkcell, the leading mobile phone operator in Turkey, deployed Red Hat OpenShift as the foundation for its AI-powered application workloads. OpenShift allowed them to create a responsive infrastructure to deliver innovative AI applications faster, cutting provisioning times from months to seconds. This reduced AI development and operations costs by 70%.

    Royal Bank of Canada and its AI research institute Borealis AI partnered with Red Hat and NVIDIA to develop a new AI computing platform designed to transform the customer banking experience and help keep pace with rapid technology changes and evolving customer expectations.


    The Open Data Hub project is a functional architecture based on Red Hat OpenShift, Red Hat Ceph Storage, Red Hat AMQ Streams, and several upstream open source projects that helps you build an open ML platform with the necessary ML tooling.

    The combined power of Red Hat OpenShift and the NVIDIA AI Enterprise software suite running on NVIDIA-Certified Systems offers a scalable platform that helps accelerate a diverse range of AI use cases. This platform includes key technologies from NVIDIA and Red Hat to securely deploy, manage, and scale AI workloads consistently across the hybrid cloud, on bare metal, or in virtualized environments.

    Transformative AI/ML use cases are occurring across healthcare, financial services, telecommunications, automotive, and other industries. Red Hat has cultivated a robust partner ecosystem to offer complete solutions for creating, deploying, and managing ML and deep learning models for AI-powered intelligent applications.
