Red Hat AI

Build AI for your world

Deliver AI value with the resources you have, the insights you own, and the freedom you need. 

Red Hat® AI is engineered to help you build and run AI solutions that work exactly how your business does—from first experiments to full production.


Flexible AI for the enterprise

To stay consistent across the hybrid cloud, you need a platform that lets you deploy where your data resides. 

Red Hat AI puts you in control of both generative and predictive AI capabilities—in the cloud, on premises, or at the edge. 

With Red Hat AI, you can stay flexible while delivering fast and scalable inference with the model and accelerator of your choice. 

Red Hat AI includes:


Red Hat® AI Inference Server optimizes model inference across the hybrid cloud for faster, cost-effective model deployments. 

Powered by vLLM, it includes access to validated and optimized third-party models on Hugging Face, as well as LLM Compressor tools. 


Red Hat Enterprise Linux® AI is a platform for inference and training of large language models to power enterprise applications.

It includes InstructLab tooling for customizing models, as well as optimized support for hardware accelerators. 

 + Includes Red Hat AI Inference Server


Red Hat OpenShift® AI builds on the capabilities of Red Hat OpenShift to provide a platform for managing the lifecycle of generative and predictive AI models at scale. 

Through integrated MLOps and LLMOps capabilities, it offers complete lifecycle management with distributed training, tuning, inference, and monitoring of AI applications across hybrid cloud environments.

 + Includes Red Hat AI Inference Server

 + Includes Red Hat Enterprise Linux AI

Validated performance for real-world impact

Red Hat AI provides access to a set of ready-to-use, validated third-party models that run efficiently on vLLM across our platform.

Use Red Hat's validated third-party models to test model performance, optimize inference, and get guidance for cutting through complexity to accelerate AI adoption.

 

Video: What is InstructLab? (2:58)

Customize LLMs locally with InstructLab

Red Hat’s InstructLab is a community-driven project that makes it easier for developers to experiment with IBM’s Granite models, even for those with minimal machine learning experience.

It’s a great place to start if you want to experiment with the AI model of your choice or fine-tune foundation models on your local hardware.

This removes the cost and resource barriers to experimenting with AI models before you're ready to bring AI to your enterprise.

More AI partners. More paths forward.

Experts and technologies are coming together so our customers can do more with AI. A variety of technology partners are working with Red Hat to certify the interoperability of their products with our solutions. 

Solution Pattern

Red Hat AI applications with NVIDIA AI Enterprise

Create a RAG application

Red Hat OpenShift AI is a platform for building data science projects and serving AI-enabled applications. You can integrate all the tools you need to support retrieval-augmented generation (RAG), a method for getting AI answers from your own reference documents. When you connect OpenShift AI with NVIDIA AI Enterprise, you can experiment with large language models (LLMs) to find the optimal model for your application.

Build a pipeline for documents

To make use of RAG, you first need to ingest your documents into a vector database. In our example app, we embed a set of product documents in a Redis database. Since these documents change frequently, we can create a pipeline for this process that we’ll run periodically, so we always have the latest versions of the documents.
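The ingestion step above can be sketched in a few lines of Python. This is a minimal, self-contained illustration, not the example app's actual code: the hash-based `embed` function stands in for a real sentence-embedding model, and the plain dictionary `store` stands in for the Redis vector database. Rerunning `ingest` on a schedule re-embeds the latest document versions, which is the point of the periodic pipeline.

```python
import hashlib
import math

def embed(text: str, dim: int = 8) -> list[float]:
    """Toy embedding: hash word tokens into a fixed-size unit vector.
    A real pipeline would call a sentence-embedding model instead."""
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.sha256(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def ingest(docs: dict[str, str], store: dict) -> None:
    """Re-embed every document and upsert it into the vector store.
    Running this periodically keeps the store in sync with the
    latest document versions."""
    for doc_id, text in docs.items():
        store[doc_id] = {"text": text, "embedding": embed(text)}

# Stand-in for the Redis vector database used in the example app.
store: dict = {}
ingest({"faq.md": "How do I reset my password?"}, store)
```

In production, the `ingest` call would be wrapped in a pipeline run (for example, an OpenShift AI data science pipeline) triggered on a schedule.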

Browse the LLM catalog

NVIDIA AI Enterprise gives you access to a catalog of different LLMs, so you can try different choices and select the model that delivers the best results. The models are hosted in the NVIDIA API catalog. Once you’ve set up an API token, you can deploy a model using the NVIDIA NIM model serving platform directly from OpenShift AI.
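As a rough sketch of the step above: the NVIDIA API catalog exposes hosted models behind an OpenAI-style chat completions API, so trying a model amounts to sending an authenticated HTTP request. The endpoint URL, model name, and token below are illustrative placeholders; check the API catalog entry for the model you choose.

```python
import json
import urllib.request

# Assumed endpoint -- confirm against the NVIDIA API catalog.
API_URL = "https://integrate.api.nvidia.com/v1/chat/completions"

def build_request(prompt: str, model: str, token: str) -> urllib.request.Request:
    """Construct an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# Hypothetical model name and placeholder token for illustration.
req = build_request(
    "Summarize our returns policy.",
    "meta/llama-3.1-8b-instruct",
    "nvapi-...",
)
# With a real token, urllib.request.urlopen(req) would send the request.
```

Swapping the `model` string is all it takes to compare candidates, which is what makes side-by-side evaluation practical.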

Choose the right model

As you test different LLMs, your users can rate each generated response. You can set up a Grafana monitoring dashboard to compare the ratings, as well as latency and response time for each model. Then you can use that data to choose the best LLM to use in production.
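The comparison described above boils down to aggregating per-request metrics by model. This sketch uses hypothetical in-memory logs; in the example app the same data would come from the Grafana-monitored metrics store.

```python
from statistics import mean

# Hypothetical per-request logs: (model, user rating 1-5, latency in seconds).
logs = [
    ("model-a", 4, 1.2), ("model-a", 5, 1.4),
    ("model-b", 3, 0.6), ("model-b", 4, 0.5),
]

def summarize(logs):
    """Aggregate ratings and latencies per model."""
    by_model = {}
    for model, rating, latency in logs:
        entry = by_model.setdefault(model, {"ratings": [], "latencies": []})
        entry["ratings"].append(rating)
        entry["latencies"].append(latency)
    return {
        m: {"avg_rating": mean(v["ratings"]), "avg_latency": mean(v["latencies"])}
        for m, v in by_model.items()
    }

summary = summarize(logs)
# Pick the highest-rated model, using lower latency as a tiebreaker.
best = max(summary, key=lambda m: (summary[m]["avg_rating"], -summary[m]["avg_latency"]))
```

Whether rating or latency should dominate the ranking is a product decision; the tiebreaker here is one reasonable choice, not the only one.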


An architecture diagram shows an application built using Red Hat OpenShift AI and NVIDIA AI Enterprise. Components include OpenShift GitOps for connecting to GitHub and handling DevOps interactions, Grafana for monitoring, OpenShift AI for data science, Redis as a vector database, and Quay as an image registry. These components all flow to the app frontend and backend. These components are built on Red Hat OpenShift AI, with an integration with ai.nvidia.com.

Red Hat AI in the real world

Ortec Finance accelerates growth and time to market 

Ortec Finance, a global technology and solutions provider for risk and return management, is serving ML models on Microsoft Azure Red Hat OpenShift and is adopting Red Hat AI.

Phoenix Systems offers next-level cloud computing

Find out how Phoenix Systems is collaborating with Red Hat to offer customers greater choice, transparency, and AI innovation.

DenizBank empowers its data scientists

DenizBank is developing AI models to help identify loans for customers and potential fraud. With Red Hat AI, its data scientists gained a new level of autonomy over their data.

Build on a reliable foundation

Enterprises around the world trust our broad portfolio of hybrid cloud infrastructure, application services, cloud-native application development, and automation solutions to deliver IT services on any infrastructure quickly and cost effectively.

Red Hat Enterprise Linux

Support application deployments—from on premises to the cloud to the edge—in a flexible operating environment.

Learn more 

Red Hat OpenShift

Quickly build and deploy applications at scale, while you modernize the ones you already have.

Learn more 

Red Hat Ansible Automation Platform

Create, manage, and dynamically scale automation across your entire enterprise.

Learn more 

Red Hat AI

Tune small models with enterprise-relevant data, and develop and deploy AI solutions across hybrid cloud environments.

Learn more 

Explore more AI resources

How to get started with AI in the enterprise

Get Red Hat Consulting for AI

Maximize AI innovation with open source models

Red Hat Consulting: AI Platform Foundation

Contact Sales

Talk to a Red Hatter about Red Hat AI