Following the successful launch of the Red Hat AI Factory with NVIDIA, Red Hat is pleased to announce the latest update in our collaboration with NVIDIA – delivering Day 0 support for the NVIDIA Nemotron open model family on the Red Hat AI Factory with NVIDIA. With this effort, we are providing a fully optimized, open source pathway for enterprise-grade generative AI.

From infrastructure to intelligence: Accelerating mainstream enterprise AI adoption

The Red Hat AI Factory with NVIDIA was designed to provide a turnkey environment for developing and deploying AI at scale. Today’s announcement expands this beyond the software and hardware stack, integrating NVIDIA’s high-performance foundation models directly into the hybrid cloud workflow.

As the industry sees the growth of proprietary, closed-box AI, Red Hat is doubling down on open source, from models to software and the surrounding ecosystems, to provide more choice and flexibility for enterprises. Our collaboration with NVIDIA helps ensure that the intelligence running within the Red Hat AI Factory with NVIDIA is transparent, reproducible, and fully optimized for the unique security and sovereignty needs of global enterprise and government customers.

Bringing NVIDIA Nemotron to the enterprise

NVIDIA Nemotron is a family of open models, data, and libraries designed to power transparent, efficient, and specialized agentic AI development across industries. Nemotron open models are optimized for reasoning, retrieval-augmented generation (RAG), and instruction-following tasks. They enable enterprises to accelerate AI innovation across sectors such as financial services, healthcare, and public sector implementations, supporting secure and scalable deployment of generative AI solutions.

To make sure these models are ready for mission-critical environments, we are deepening our engineering collaboration with NVIDIA to deliver Day 0 support for new NVIDIA Nemotron models, including Nemotron 3 Super, on vLLM and Red Hat AI. This means customers can run NVIDIA Nemotron models on Red Hat AI the moment they are released.
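In practice, Day 0 support on vLLM means a released Nemotron checkpoint can be served with the standard vLLM workflow. A minimal sketch is below; note that the Hugging Face model ID is illustrative, not a published artifact name, so substitute the exact identifier from NVIDIA's model card:

```shell
# Install vLLM (Red Hat AI ships a supported build; upstream shown here)
pip install vllm

# Serve a Nemotron model behind an OpenAI-compatible API.
# NOTE: the model ID below is a placeholder, not the real checkpoint name.
vllm serve nvidia/<nemotron-model-id> --port 8000

# Query the running server via the OpenAI-compatible chat endpoint
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "nvidia/<nemotron-model-id>",
        "messages": [{"role": "user", "content": "Summarize RAG in one sentence."}]
      }'
```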

As part of this effort, Red Hat will publish rigorous inference performance and accuracy benchmarks on certified GPUs to validate Nemotron models on Red Hat AI platforms, including Red Hat AI Enterprise, along with clear deployment guidance. From there, Red Hat AI Enterprise acts as the engine for model delivery and validation within the Red Hat AI Factory with NVIDIA. This gives organizations enhanced performance and enterprise-ready packaging, with models delivered as OCI artifacts and modelcars that enable container vulnerability scanning and consistent lifecycle management across the hybrid cloud.
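Delivering a model as an OCI artifact means a model server can reference it like any other container image. As a hedged sketch of what that looks like with KServe's modelcar (`oci://`) storage support, where the service name, runtime name, and registry path below are illustrative assumptions rather than published values:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: nemotron-demo            # illustrative name
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM
      runtime: vllm-runtime      # assumed ServingRuntime name
      # The oci:// scheme tells KServe to mount the model weights from a
      # modelcar (an OCI container image); this registry path is hypothetical.
      storageUri: oci://registry.example.com/models/nemotron:latest
```

Because the model ships as a standard container image, it flows through the same registry, vulnerability scanning, and promotion pipelines as the rest of an organization's workloads.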

The future is open

The Red Hat AI Factory with NVIDIA is a complete engine for enterprise innovation. By integrating NVIDIA Nemotron open models into this framework, we are giving customers a reliable, high-performance toolkit that they can truly own. Together, we are proving that the future of enterprise AI is open, secure, and built on a foundation of collaboration.

Get started today with the Red Hat AI Factory with NVIDIA and find us at NVIDIA GTC to speak to one of our experts directly. 

Resources

The adaptive enterprise: AI readiness is crisis resilience

This e-book, written by Red Hat COO and CSO Michael Ferris, examines the pace of AI-driven change and technological disruption facing today's IT leaders.

About the authors

My name is Rob Greenberg, Principal Product Manager for Red Hat AI, and I came over to Red Hat with the Neural Magic acquisition in January 2025. Prior to joining Red Hat, I spent 3 years at Neural Magic building and delivering tools that accelerate AI inference with optimized, open-source models. I've also had stints as a Digital Product Manager at Rocketbook and as a Technology Consultant at Accenture.

Tyler received a PhD in Computer Science from The University of Texas at Austin, studying high-performance dense linear algebra: microkernels, parallelism, and theoretical lower bounds on data movement. He later joined Neural Magic, first working on a graph compiler for sparse neural network inference on CPUs. He now works on large language model inference in vLLM and llm-d, with a focus on large-scale serving.
