Earlier this year, we launched the Red Hat AI quickstart catalog, a collection of ready-to-run blueprints designed to help organizations move from talking about AI to using large language models (LLMs) to solve real-world problems. This provides systems integrators and architects with example AI solutions that Red Hat engineering has tested and streamlined for easy deployment.
Once you've successfully rolled out an interactive solution on Red Hat AI, however, the next question is usually, "How do I protect this in the real world?"
To help answer this, we've expanded the AI quickstart catalog with one of our first partner-led entries: the F5 Distributed Cloud API Security AI quickstart.
Protecting your AI endpoints
Most organizations have no trouble spinning up a basic chat assistant or a retrieval-augmented generation (RAG) demo. The friction starts when they realize that an inference endpoint is, at its core, an API, and APIs are a primary target for modern exploits.
For those of us helping customers architect these systems, security concerns are often what prevent promising pilots from reaching production. This new AI quickstart, collaboratively developed by F5 and Red Hat, helps you get past that hurdle. It demonstrates how to apply enterprise-grade protection before users begin interacting with your AI models.
Inside the F5 Distributed Cloud API Security AI quickstart
The F5 Distributed Cloud API Security AI quickstart is a modular blueprint that integrates F5 Distributed Cloud (XC) Services with the Red Hat AI platform. It's designed to be deployed in under 90 minutes, giving you a fully functional, protected environment to demonstrate:
- Schema validation: Ensuring your LlamaStack or vLLM endpoints process only well-formed, authorized requests
- Sensitive data guardrails: Automatically detecting and redacting personally identifiable information (PII) or proprietary data before it ever leaves your environment
- Resource protection: Implementing rate limiting and bot defense so your GPU cycles are used by legitimate users, not malicious scrapers
- Hybrid flexibility: Whether your model is running on-premises or in a public cloud, the architecture remains consistent
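To make the first two capabilities above concrete, here is a minimal sketch of what schema validation and PII redaction look like in principle. In the quickstart these checks are enforced at the F5 Distributed Cloud layer, in front of the model; the Python below is purely illustrative, and the field names and regex patterns are simplified assumptions, not the actual F5 XC configuration.

```python
import re

# Hypothetical allowed fields for an inference request body.
ALLOWED_FIELDS = {"model", "messages", "max_tokens"}

def validate_request(payload: dict) -> None:
    """Reject requests that don't match the expected inference schema."""
    unknown = set(payload) - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"unexpected fields: {sorted(unknown)}")
    messages = payload.get("messages")
    if not isinstance(messages, list) or not messages:
        raise ValueError("'messages' must be a non-empty list")

# Deliberately rough PII patterns (email, SSN-like numbers).
# Production guardrails use far more robust detection than these.
PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like number
]

def redact(text: str) -> str:
    """Mask sensitive data before it leaves the environment."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

A request such as `{"model": "granite", "messages": [...]}` passes validation, while one carrying an unexpected field is rejected before it ever reaches the GPU; similarly, `redact("email alice@example.com")` returns `email [REDACTED]`.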
Building together
By bringing F5's decades of security expertise into an AI quickstart, we're demonstrating a reusable approach to the "Day 2" security problems that surface once a pilot moves toward production.
The goal isn't just to kick the tires; it's to provide a predictable, reusable framework so that when a customer asks how their data will be protected, you'll have a working, demonstrable response.
Get started
You can clone the repository from GitHub and take it for a test drive on your cluster today: Explore the F5 API Security quickstart.
About the author
For Shane Heroux, technology has always been about connections: connecting systems, people, and ideas. His open source journey kicked off in a college dorm room in the mid-90s, tinkering with Slackware just for fun. It wasn't long before he found his way to Red Hat, and he's been an active part of the Linux and open source communities ever since.
He officially joined the team in 2018, first diving deep into the world of containers as an OpenShift Consultant. He then moved into the partner space as a Technical Account Manager, where he discovered a passion for building success with partners, not just for them.
Today, that focus is his pride and joy. Shane thrives on collaborating with the incredible Red Hat partner ecosystem to design and develop creative solutions that solve real-world problems. For him, it's all about using the power of open, collaborative technology to build a better, more efficient, and more connected world for everyone.