Model Context Protocol (MCP) has moved fast. What started as an open source project from Anthropic in late 2024 is now governed by the Agentic AI Foundation (AAIF) under the Linux Foundation, with over 140 member organizations. Red Hat joined the AAIF as a Gold Member earlier this year to support the foundation’s work to advance open standards for agentic AI. Earlier this year, the MCP Dev Summit in New York drew over 1,200 attendees, who gathered to discuss the protocol’s evolution and how to run MCP in production at scale. Thousands of MCP servers now exist across the ecosystem, and the number is growing weekly.
For enterprises, the question of MCP adoption has shifted from whether to adopt it to how to do so safely. MCP servers can give AI agents access to a broad range of tools and data, but without a governance layer there’s no consistent way to control who can access what, enforce rate limits, or apply security policy. These are familiar challenges, though: they are the same ones platform teams already face with existing application connectivity ecosystems.
That’s why we built an MCP gateway, which is now available as a technology preview in Red Hat Connectivity Link.
What MCP gateway does
MCP gateway sits between AI agents and the MCP servers they connect to. It handles traffic control at the infrastructure layer, so your AI platform teams can focus on the AI lifecycle, and it provides a single, managed entry point that federates multiple MCP servers behind one gateway endpoint. Agents get a unified view of all available tools across servers, while platform teams get the controls they need. Capabilities include:
- MCP server federation: Aggregate tools from multiple MCP servers behind a single endpoint, making it easier for agents to discover and call tools without needing to know about all servers
- Authentication and authorization: Control access to MCP servers and tools using the same mechanisms you already use with Connectivity Link
- Horizontal scaling: Use a Redis-backed session store that enables multi-replica deployments, so the gateway can scale alongside your workloads
- Virtual servers: Slice your tools list into smaller, more focused groupings for easier delegation and token efficiency
The key piece that enables your existing API gateway to start handling MCP is creating an MCPGatewayExtension resource. Here’s what it looks like:
apiVersion: mcp.kuadrant.io/v1alpha1
kind: MCPGatewayExtension
metadata:
  # Extend an existing gateway with MCP capabilities
  name: mcp-extension
  namespace: gateway-system
spec:
  # Points to your existing Gateway API gateway resource
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: mcp-gateway
    namespace: gateway-system

Note the targetRef field that points to your existing gateway resource. Once created, the gateway is configured to parse MCP traffic. To understand how an MCP server is registered and secured, let’s look at an example MCPServerRegistration and AuthPolicy resource:
apiVersion: mcp.kuadrant.io/v1alpha1
kind: MCPServerRegistration
metadata:
  # Register a GitHub MCP server behind the gateway
  name: github-mcp
  namespace: mcp-servers
spec:
  # Prefix added to all tools from this server (e.g. github_search_repos)
  toolPrefix: "github_"
  # Points to the HTTPRoute that defines how traffic reaches this MCP server
  targetRef:
    group: "gateway.networking.k8s.io"
    kind: "HTTPRoute"
    name: "github-mcp-route"
    namespace: "mcp-servers"

The MCPServerRegistration resource targets the HTTPRoute resource in front of your MCP server. You can optionally configure a prefix for all tools discovered from this server, in case they collide with tool names from other servers.
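To illustrate how prefixes keep federated tool names distinct, here is a sketch of a second registration (the Slack server name, route, and prefix below are hypothetical; the fields mirror the GitHub example above):

```yaml
apiVersion: mcp.kuadrant.io/v1alpha1
kind: MCPServerRegistration
metadata:
  # Hypothetical second server; its "search" tool would surface as slack_search
  name: slack-mcp
  namespace: mcp-servers
spec:
  # A different prefix avoids collisions with github_-prefixed tools
  toolPrefix: "slack_"
  targetRef:
    group: "gateway.networking.k8s.io"
    kind: "HTTPRoute"
    name: "slack-mcp-route"
    namespace: "mcp-servers"
```

With both registrations in place, an agent listing tools through the gateway sees one merged catalog, with each server's tools distinguished by prefix.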
apiVersion: kuadrant.io/v1
kind: AuthPolicy
metadata:
  name: mcp-auth
  namespace: gateway-system
spec:
  # Attaches to the gateway's MCP listener
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: mcp-gateway
    sectionName: mcp
  defaults:
    # Skip auth for OAuth discovery endpoints
    when:
      - predicate: "!request.path.contains('/.well-known')"
    rules:
      authentication:
        # Validate JWTs issued by the Keycloak MCP realm
        "keycloak":
          jwt:
            issuerUrl: http://keycloak.example.com/realms/mcp
      response:
        # Return OAuth-compliant 401 with resource metadata for auto-discovery
        unauthenticated:
          code: 401
          headers:
            "WWW-Authenticate":
              value: Bearer resource_metadata=http://mcp.example.com/.well-known/oauth-protected-resource/mcp
          body:
            value: |
              {
                "error": "Unauthorized",
                "message": "Authentication required."
              }

The AuthPolicy targets the gateway, adding validation of OAuth JWTs issued by your Keycloak instance.
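From the agent side, a request to the gateway carries the JWT as a bearer token and speaks JSON-RPC 2.0, as MCP does. The sketch below builds a tools/list request; the endpoint URL and token are placeholders for your own values, and the actual HTTP call is left to whichever client your agent framework uses:

```python
import json

# Placeholder endpoint and token -- substitute your gateway URL and a JWT
# issued by your Keycloak realm.
MCP_ENDPOINT = "http://mcp.example.com/mcp"
TOKEN = "<your-jwt>"

# MCP is JSON-RPC 2.0 over HTTP; tools/list asks for the federated tool list.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}

body = json.dumps(request)
print(body)
# Send with any HTTP client, e.g.:
# resp = requests.post(MCP_ENDPOINT, data=body, headers=headers)
```

A request without a valid token gets the 401 response configured above, and the WWW-Authenticate header points OAuth-aware clients at the protected-resource metadata for auto-discovery.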
For the full list of capabilities and guides, including how to extend the AuthPolicy with authorization, check out the release notes and documentation.
Built on standards and Red Hat Connectivity Link
MCP gateway is built on Gateway API, the Kubernetes-native standard for ingress and traffic management. This is the same standards-based approach that underpins the rest of Connectivity Link. Policy attachment, the Gateway API pattern for applying auth, rate limiting, and other policies to traffic, works the same way for MCP as it does for your HTTP and gRPC APIs. There’s no separate system to learn. The diagram below shows how MCP traffic is just another step in the flow through the gateway.
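As a sketch of that policy-attachment pattern, a Kuadrant RateLimitPolicy can target the same gateway listener as the AuthPolicy (the limit name and values here are illustrative; check the Kuadrant documentation for the exact schema in your version):

```yaml
apiVersion: kuadrant.io/v1
kind: RateLimitPolicy
metadata:
  name: mcp-ratelimit
  namespace: gateway-system
spec:
  # Same policy-attachment pattern: target the gateway's MCP listener
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: mcp-gateway
    sectionName: mcp
  limits:
    "per-client":
      rates:
        # Illustrative: 100 requests per 60-second window
        - limit: 100
          window: 60s
```

The same attach-a-policy-to-a-target workflow applies whether the traffic is REST, gRPC, or MCP.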
Architecture diagram: MCP traffic flows from an AI agent through the gateway, where it is decoded and auth and rate limiting are applied, before being routed on to MCP servers.
If you’re already using Connectivity Link for API management and traffic policies, MCP gateway extends that same governance model to AI agent traffic.
When used within Red Hat OpenShift AI, the MCP gateway's capabilities expand to support more robust enterprise AgentOps and governance. OpenShift AI introduces a developer preview MCP catalog in the AI hub, creating a centralized, governed space to deploy verified assets. The platform adds advanced security features like identity-based tool filtering, so agents can only access authorized tools, and mandatory approvals for sensitive calls. Furthermore, MLflow integration provides end-to-end agent traceability, logging every large language model (LLM) call and tool execution for comprehensive observability. Combined with Connectivity Link, autonomous workflows remain fully observable, auditable, and aligned with enterprise security policies.
Open source first
MCP gateway is built on the upstream Kuadrant MCP gateway project. We develop in the open, and we’re actively tracking the MCP specification as it evolves. Support for elicitation has already landed, with further features like prompts and resources federation coming soon. As the InfoQ coverage of MCP Dev Summit noted, the gateway pattern has emerged as the architectural consensus among enterprise adopters. We’re building on that consensus, and we want to work with the community to get it right.
To get involved upstream, visit the MCP gateway project on GitHub or join the Kuadrant community.
Get started
MCP gateway is available now as a technology preview in Red Hat Connectivity Link. To try it out, follow the installation guide for Red Hat OpenShift.
About the author
David Martin has been working in the area of managed services since 2009. His work has ranged from a mobile backend platform based on Node.js and Linux containers to API and integration products running on customer OpenShift clusters. His role has revolved around core engineering, with a hint of on-call operations, SRE methodology, and observability. More recently he has been involved in the area of multicluster Kubernetes and API management.