For decades, Red Hat has been focused on providing the foundation for enterprise technology — a flexible, consistent, and open platform. Today, as AI moves from a science experiment to a core business driver, that mission is more critical than ever. The challenge isn't just building AI models and AI-enabled applications; it's ensuring the underlying infrastructure is ready to support them at scale, from the datacenter to the edge.

This is why I'm so enthusiastic about the collaboration between Red Hat and NVIDIA. We've long worked together to bring our technologies to the open hybrid cloud, and our new agreement to distribute the NVIDIA CUDA Toolkit across the Red Hat portfolio is a testament to that history. This isn't just another partnership; it's about making it simpler for you to innovate with AI, no matter where you are on your journey.

Why this matters: Simplicity and consistency

Today, one of the most significant barriers to AI adoption isn't a lack of models or compute power, but rather the operational complexity of getting it all to work together. Engineers and data scientists shouldn't have to spend their time managing dependencies, hunting for compatible drivers, or figuring out how to get their workloads running reliably on different systems.

Our new agreement with NVIDIA addresses this head-on. By distributing the NVIDIA CUDA Toolkit directly within our platforms, we're removing a major point of friction for developers and IT teams. You will be able to get the essential tools for GPU-accelerated computing from a single, trusted source. This means:

  • A streamlined developer experience. Developers can now access a complete stack for building and running GPU-accelerated applications directly from our repositories, which simplifies installation and provides automatic dependency resolution.
  • Operational consistency. Whether you're running on-premise, in a public cloud, or at the edge, you can rely on a more consistent, tested, and supported environment for your AI workloads. This is the essence of the open hybrid cloud.
  • A foundation for the future. This new level of integration sets the stage for future collaboration, enabling Red Hat’s platforms to seamlessly work with the latest NVIDIA hardware and software innovations as they emerge.
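As a sketch of what that streamlined experience could look like on RHEL once the toolkit ships in Red Hat repositories — note that the exact repository and package names below are illustrative assumptions, not details confirmed in this announcement:

```shell
# Hypothetical sketch: installing the CUDA Toolkit from a Red Hat-provided
# repository on RHEL. Package name "cuda-toolkit" is an assumption.
sudo dnf install -y cuda-toolkit   # dnf pulls in compatible driver and library dependencies

# Verify the CUDA compiler is available on the PATH
nvcc --version
```

The point is the workflow, not the specific commands: a single `dnf install` from a trusted, tested repository, with dependency resolution handled for you, rather than manually matching drivers, runtimes, and kernel versions.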

We are bringing this to life across our portfolio, including Red Hat Enterprise Linux (RHEL), Red Hat OpenShift, and Red Hat AI.

Our open source approach to AI

This collaboration with NVIDIA is also an example of Red Hat's open source philosophy in action. We're not building a walled garden. Instead, we're building a bridge between two of the most important ecosystems in the enterprise: the open hybrid cloud and the leading AI hardware and software platform. Our role is to provide a more stable and reliable platform that lets you choose the best tools for the job, all with an enhanced security posture.

The future of AI is not about a single model, a single accelerator, or a single cloud. It's about a heterogeneous mix of technologies working together to solve real-world problems. By integrating the NVIDIA CUDA Toolkit directly with our platforms, we're making it easier for you to build that future. 


About the author

Ryan King is Vice President of AI and Infrastructure for the Partner Ecosystem Success organization at Red Hat. In this role, King leads a team in shaping Red Hat's AI strategy with key infrastructure and hardware providers to drive go-to-market engagements and customer success with AI.

