Following the successful launch of the Red Hat AI Factory with NVIDIA, Red Hat is pleased to announce the latest update in our collaboration with NVIDIA – delivering Day 0 support for the NVIDIA Nemotron open model family on the Red Hat AI Factory with NVIDIA. With this effort, we are providing a fully optimized, open source pathway for enterprise-grade generative AI.
From infrastructure to intelligence: Accelerating mainstream enterprise AI adoption
The Red Hat AI Factory with NVIDIA was designed to provide a turnkey environment for developing and deploying AI at scale. Today’s announcement expands this beyond the software and hardware stack, integrating NVIDIA’s high-performance foundation models directly into the hybrid cloud workflow.
As the industry sees the growth of proprietary, closed-box AI, Red Hat is doubling down on open source, from models to software and the surrounding ecosystems, to provide more choice and flexibility for enterprises. Our collaboration with NVIDIA helps ensure that the intelligence running within the Red Hat AI Factory with NVIDIA is transparent, reproducible, and fully optimized for the unique security and sovereignty needs of global enterprise and government customers.
Bringing NVIDIA Nemotron to the enterprise
NVIDIA Nemotron is a family of open models, data, and libraries designed to power transparent, efficient, and specialized agentic AI development across industries. Nemotron open models are optimized for reasoning, retrieval-augmented generation (RAG), and instruction-following tasks. They enable enterprises to accelerate AI innovation across sectors such as financial services, healthcare, and public sector implementations, supporting secure and scalable deployment of generative AI solutions.
To make sure these models are ready for mission-critical environments, we are continuing to deepen our engineering collaboration with NVIDIA to deliver Day 0 support for new NVIDIA Nemotron models, including Nemotron 3 Super, on vLLM and Red Hat AI. This means customers will be able to run NVIDIA Nemotron models on Red Hat AI from the moment of release.
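Because Day 0 support lands in vLLM first, serving a newly released model follows the standard vLLM workflow. The sketch below is illustrative: the model identifier and flag values are placeholders, not confirmed names for the Nemotron 3 Super release, and actual hardware requirements depend on the model size.

```shell
# Illustrative only: serve a Nemotron-family model with vLLM's
# OpenAI-compatible server. The model ID below is a placeholder;
# use the identifier published with the actual release.
vllm serve nvidia/<nemotron-model-id> \
  --tensor-parallel-size 2 \
  --max-model-len 8192 \
  --port 8000

# Once the server is up, any OpenAI-compatible client can query it:
curl http://localhost:8000/v1/models
```

The same vLLM engine underpins model serving in Red Hat AI, so a model validated this way locally carries over to the platform deployment path described below.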
As part of this effort, Red Hat will provide rigorous inference performance and accuracy benchmarks on certified GPUs to validate Nemotron models on Red Hat AI platforms, including Red Hat AI Enterprise, and provide clear deployment guidance. From there, Red Hat AI Enterprise acts as the engine for model delivery and validation within the Red Hat AI Factory with NVIDIA. This empowers organizations with enhanced performance and enterprise-ready packaging, with models delivered as OCI artifacts and Modelcars that enable container vulnerability scanning and consistent lifecycle management across the hybrid cloud.
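In KServe (the model-serving layer used by Red Hat AI platforms), the ModelCar delivery pattern means the model is pulled as an OCI image via an `oci://` storage URI. The fragment below is a minimal sketch, assuming a hypothetical registry path and service name; the exact serving runtime and model format names depend on your platform configuration.

```
# Illustrative KServe InferenceService using an OCI artifact (ModelCar).
# Registry path and names are placeholders, not a published Nemotron image.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: nemotron-example
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM
      storageUri: oci://registry.example.com/models/nemotron:latest
```

Delivering the model as a container image is what enables the vulnerability scanning and lifecycle management mentioned above: the same registry tooling that governs application images applies to the model artifact.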
The future is open
The Red Hat AI Factory with NVIDIA is a complete engine for enterprise innovation. By integrating NVIDIA Nemotron open models into this framework, we are giving customers a reliable, high-performance toolkit that they can truly own. Together, we are proving that the future of enterprise AI is open, secure, and built on a foundation of collaboration.
Get started today with the Red Hat AI Factory with NVIDIA and find us at NVIDIA GTC to speak to one of our experts directly.
About the authors
My name is Rob Greenberg, Principal Product Manager for Red Hat AI, and I came over to Red Hat with the Neural Magic acquisition in January 2025. Prior to joining Red Hat, I spent 3 years at Neural Magic building and delivering tools that accelerate AI inference with optimized, open-source models. I've also had stints as a Digital Product Manager at Rocketbook and as a Technology Consultant at Accenture.
Tyler received a PhD in Computer Science from The University of Texas at Austin, studying high-performance dense linear algebra: microkernels, parallelism, and theoretical lower bounds on data movement. He later joined Neural Magic, first working on a graph compiler for sparse neural network inference on CPUs. He now works on large language model inference in vLLM and llm-d, with a focus on large-scale serving.