In just a few short years, AI technology has evolved from basic chat completions to autonomous, long-running agents. This poses a challenge for IT teams, who need to enable their builders to innovate while also providing guardrails and controls to reduce enterprise risk.
More than just chatbots or assistants, agents are now autonomous entities capable of operating over extended horizons, crafting their own sub-agents, and using specialized tools to complete multi-step plans. But as agents leave the developer's laptop and start interacting with production data and external APIs, freedom without guardrails becomes a significant liability. At Red Hat, our AgentOps strategy is built on a simple principle: Bring Your Own Agent (BYOA). You bring your agent, and we provide the enterprise-grade platform and tools needed to connect it to security policies, sandboxes, gateways, and more, to make it production-ready.
Today, we are excited to highlight our deepening collaboration with NVIDIA to enable a security-centered, agent-driven digital workforce by integrating the open source NVIDIA OpenShell runtime and NVIDIA AI-Q Blueprint — part of NVIDIA Agent Toolkit — with our Red Hat AI platform.
NVIDIA OpenShell: Infrastructure-enforced agentic safety
One of the biggest gaps in the current AI stack is the lack of a dedicated layer that provides necessary tool and service access to agents while simultaneously enforcing strict security and privacy controls. NVIDIA OpenShell is an open source runtime designed specifically to answer this need, with key features like agent sandboxing, deny-by-default policy and privacy-preserving routing.
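To make the deny-by-default idea concrete, here is a minimal conceptual sketch in Python. The class and rule names are hypothetical illustrations of the pattern, not NVIDIA OpenShell's actual API:

```python
# Conceptual sketch of a deny-by-default tool-access policy.
# Hypothetical names for illustration; not NVIDIA OpenShell's real API.
from dataclasses import dataclass, field


@dataclass
class ToolPolicy:
    """Deny every (agent, tool) pair unless an explicit allow rule exists."""
    allowed: set = field(default_factory=set)

    def allow(self, agent: str, tool: str) -> None:
        # Access must be granted explicitly, one pair at a time.
        self.allowed.add((agent, tool))

    def check(self, agent: str, tool: str) -> bool:
        # No matching rule means no access: the default is deny.
        return (agent, tool) in self.allowed


policy = ToolPolicy()
policy.allow("research-agent", "web_search")

print(policy.check("research-agent", "web_search"))  # explicitly allowed
print(policy.check("research-agent", "shell_exec"))  # denied by default
```

The key design choice is that the safe state requires no configuration at all: an agent that has never been mentioned in a rule can reach nothing.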
NVIDIA OpenShell operates within Kubernetes and can be deployed on Red Hat AI. This deployment allows for the integration of agents with self-hosted models powered by vLLM, along with MCP tools and other AI services, all within a hybrid AI infrastructure. NVIDIA OpenShell helps deliver the necessary security capabilities and functions as an agent sandbox. Building upon this, the new NVIDIA AI-Q Blueprint offers an open reference architecture for a deep research agent. This blueprint utilizes planner and researcher sub-agents to deliver enhanced accuracy, demonstrating the kind of sophisticated agent the Red Hat AI platform can support.
We’re also working with NVIDIA on NVIDIA NemoClaw, an open source stack that simplifies running OpenClaw always-on assistants more safely, with a single command. As part of the NVIDIA Agent Toolkit, it installs the NVIDIA OpenShell runtime (a security-enhanced environment for running autonomous agents) and open source models like NVIDIA Nemotron.
A growing portfolio of agentic security
This work is the natural next step in Red Hat and NVIDIA’s long-standing collaboration. We have already integrated NVIDIA NeMo Guardrails into Red Hat OpenShift AI to provide programmable conversational rails at the inference boundary. Why does this matter? Because for many enterprises, trust is the primary blocker to AI adoption, not performance or cost. By collaborating with NVIDIA, we are providing the AI factory infrastructure that helps define your agentic workforce as:
- Isolated: A compromised agent cannot reach the host or other agents' data.
- Identifiable: Every agent carries a cryptographic workload identity.
- Observable: Every prompt, tool call, and reasoning step is captured via MLflow Tracing.
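In Kubernetes terms, the isolation property above is commonly enforced with a default-deny NetworkPolicy. A minimal sketch follows; the namespace name is illustrative, not a value from NVIDIA OpenShell or Red Hat AI:

```yaml
# Default-deny policy: pods in the agent namespace accept no ingress
# and initiate no egress unless a more specific policy allows it.
# The namespace name "agent-sandbox" is a hypothetical example.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: agent-sandbox
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```

With this baseline in place, each tool or service an agent legitimately needs is opened by an additional, narrowly scoped policy, mirroring the deny-by-default posture at the network layer.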
Together, we are building a security-enhanced agent environment where millions of knowledge workers can more safely turn everyday work into AI-driven innovation.
About the author
Joe Fernandes is Vice President and General Manager of the Artificial Intelligence (AI) Business Unit at Red Hat, where he leads product management, product marketing, and technical marketing for Red Hat's AI platforms, including Red Hat Enterprise Linux AI (RHEL AI) and Red Hat OpenShift AI.