The world of gen AI is moving at lightning speed. For enterprises, navigating the flood of new large language models (LLMs), tools (like Model Context Protocol (MCP) servers), and frameworks can feel overwhelming. How do you choose the right model? How do you empower your teams to experiment and build with the latest innovations in AI without creating organizational barriers?
At Red Hat, we believe the future of AI is open, accessible, and manageable at scale. That's why we're excited to announce two new consolidated dashboard experiences in Red Hat OpenShift AI 3.0: the AI hub and the gen AI studio.
These experiences are designed to streamline the entire gen AI lifecycle for enterprises by providing tailored components for the key personas innovating with AI within organizations: platform engineers and AI engineers.
What's new: A closer look
The AI hub and the gen AI studio work together to create a cohesive, end-to-end workflow for building production-ready AI solutions on a trusted, consistent platform.
AI hub: The platform engineer's control panel
The AI hub is the central point for the management and governance of gen AI assets within OpenShift AI. It empowers platform engineers to discover, deploy, and manage the foundational components their teams need. Key components include:
- Catalog: A curated library where platform engineers can discover, compare, and evaluate a wide variety of models. This helps overcome "model selection paralysis" by providing the data needed to choose the optimal model for any use case.
- Registry: A central repository to register, version, and manage the lifecycle of AI models before they are configured for deployment.
- Deployments: An administrative page to configure, deploy, and monitor the status of models running on the cluster.
Gen AI studio: The AI engineer's innovation workbench
While the AI hub provides control, the gen AI studio delivers a hands-on environment for AI engineers to consume, experiment, and build. It's where innovation takes shape. The studio includes:
- AI asset endpoints: A simple, clear view for AI engineers to see all the deployed and provisioned models and MCP servers (registered via a ConfigMap) available for them to use within their projects.
- Playground: An interactive, stateless environment where AI engineers can experiment with deployed assets, test prompts, tune parameters, and evaluate performance for use cases like chat and Retrieval-Augmented Generation (RAG).
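Models listed on the AI asset endpoints page are typically served behind an OpenAI-compatible API, so experiments that start in the playground can move into code with little friction. The sketch below is illustrative only: the endpoint URL, token, and model name are placeholders you would replace with the values shown for your own deployment.

```python
import json
import urllib.request

# Placeholder values: substitute the inference endpoint and token that the
# AI asset endpoints page shows for your deployed model.
ENDPOINT = "https://my-model.example.com/v1/chat/completions"
API_TOKEN = "replace-with-your-token"


def build_chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-compatible chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def ask(prompt: str) -> str:
    """Send a prompt to the deployed model and return its reply.

    Call this against a live endpoint; it performs a network request.
    """
    payload = build_chat_request("granite-3-8b-instruct", prompt)  # placeholder model name
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same request shape works for quick one-off tests and for wiring a validated model into an application, which is what makes the endpoint view a useful bridge between the playground and production code.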
This integrated experimentation space moves testing out of disparate, local environments and into a standardized, collaborative platform, accelerating the entire development lifecycle. At the same time, it gives AI engineers clear visibility into which approved AI assets they can access and use, so experimentation stays in line with enterprise policies.
Real-world value for the enterprise
By creating a unified experience for both platform and AI engineers, OpenShift AI 3.0 delivers powerful benefits that address common enterprise challenges.
- Streamlined model selection: The AI hub's catalog cuts through the market noise, allowing platform engineers to curate a set of enterprise-ready models that are the best fit for their business needs.
- Centralized and governed AI assets: The platform solves organizational fragmentation by creating a single source of truth for all gen AI assets. Platform engineers can manage, version, and provide intentional access to AI engineers, laying the foundation for full AI governance.
- Accelerated development cycles: The gen AI studio’s playground bridges the gap between discovery and integration. AI engineers can rapidly test and validate models and, with the surfacing of MCP servers, experiment with tool calling in a standardized way. This reduces deployment failures and gets AI-powered applications to production faster.
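To make the tool-calling point concrete, here is a minimal sketch of the OpenAI-style request an AI engineer might send to a deployed model to let it call tools. The `get_weather` tool, its schema, and the model name are hypothetical stand-ins for the kind of capability an MCP server might surface; they are not part of the product.

```python
def build_tool_call_request(model: str, prompt: str, tools: list) -> dict:
    """Build an OpenAI-compatible chat request that advertises callable tools."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": tools,
        "tool_choice": "auto",  # let the model decide whether to invoke a tool
    }


# Hypothetical tool definition, illustrating a capability an MCP server
# might expose to the model.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

request = build_tool_call_request(
    "granite-3-8b-instruct",  # placeholder model name
    "What's the weather in Raleigh?",
    [weather_tool],
)
```

Because the tool schema travels with the request in a standard format, the same experiment an engineer runs in the playground can be reproduced in application code without reworking the integration.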
How to get started
Ready to bring centralized control and accelerated innovation to your AI workflows? The AI hub and gen AI studio are here to help you build your next generation of AI applications on a platform you can trust.
What’s next?
This is just the beginning. We're continuing to evolve OpenShift AI to support the full spectrum of gen AI. Going forward, you can expect the AI assets experience to expand MCP server management and add other high-priority components, such as agents, knowledge sources for RAG, and safety guardrails.
We also plan to extend AI asset sovereignty and governance in the AI hub and gen AI studio, giving enterprises fine-grained control over AI asset quotas, permissions, and lifecycle management to help ensure compliance.
The future of enterprise AI is open and collaborative, and OpenShift AI is the platform to get you there.
Resource
The adaptable enterprise: Why AI readiness is disruption readiness
About the authors
My name is Rob Greenberg, Principal Product Manager for Red Hat AI, and I came over to Red Hat with the Neural Magic acquisition in January 2025. Prior to joining Red Hat, I spent 3 years at Neural Magic building and delivering tools that accelerate AI inference with optimized, open-source models. I've also had stints as a Digital Product Manager at Rocketbook and as a Technology Consultant at Accenture.