At Red Hat Summit 2025, we introduced Ask Red Hat, a conversational AI designed to be an intelligent front door for our customers. It began as a rapid 12-week build to prove that open source AI could transform the support experience.
Today, Ask Red Hat has evolved from a proof of concept into a sophisticated production reality. As of late 2025, it has served over 50,000 unique users and handled more than 450,000 messages. It is no longer just a standalone tool but a cross-product orchestration layer, integrated directly into docs.redhat.com and new support case creation in the Red Hat Customer Portal. Ask Red Hat is delivering on its goal: helping customers solve problems the very moment they arise.
The technical engine: Granite and the guardrails of trust
While much of the industry is still experimenting with models in isolation, Red Hat has operationalized the IBM Granite family to solve specific customer pain points. Our leadership in this space is defined by how we balance power with precision:
- The model: We use the Granite-3.x 8B-Instruct variant for its high efficiency and precision in retrieval-augmented generation (RAG), with plans to migrate to Granite-4.x Small in the near future.
- The safety net: We’ve implemented a robust safety architecture using Granite Guardian models as "guardrails." These evaluate both user inputs and AI outputs in real time to block jailbreak attempts and keep the assistant on task.
- Precision retrieval: In December 2025, we launched a retrieval optimization phase that resulted in a 45% improvement in MRR (mean reciprocal rank), helping customers get version-appropriate documentation for the product and version they ask about.
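MRR is a standard retrieval metric: for each query, take the reciprocal of the rank at which the first relevant document appears, then average across all queries. A minimal sketch of the computation (the data and function are illustrative, not Ask Red Hat's internal evaluation code):

```python
def mean_reciprocal_rank(ranked_results, relevant):
    """Average 1/rank of the first relevant document per query (0 if none is found)."""
    total = 0.0
    for query_id, docs in ranked_results.items():
        for rank, doc in enumerate(docs, start=1):
            if doc in relevant[query_id]:
                total += 1.0 / rank
                break
    return total / len(ranked_results)

# Illustrative: first relevant hit at ranks 1, 2, and 4.
results = {
    "q1": ["d3", "d1"],
    "q2": ["d7", "d2", "d9"],
    "q3": ["d5", "d6", "d8", "d4"],
}
relevant = {"q1": {"d3"}, "q2": {"d2"}, "q3": {"d4"}}
print(mean_reciprocal_rank(results, relevant))  # (1 + 1/2 + 1/4) / 3 ≈ 0.583
```

A 45% MRR improvement means relevant documents now surface meaningfully closer to the top of the retrieved list, which directly improves what the model has to ground its answers in.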
Measuring the unseen: The value of AI and Ask Red Hat
The biggest question for the enterprise today is "how do we measure AI success?" As the first step toward an intelligent front door, Ask Red Hat has an impact across almost every team at Red Hat in one way or another. Our founding objective was to help customers self-solve whenever possible.
To underscore this goal, our lead back-end engineer, who worked in Red Hat technical support at the start of his career, once said, “If I can solve a problem myself in five minutes I’d rather do that than wait for Support to call me back, even if it’s only an hour.” If we can help customers who would otherwise have had to open a support case, they are happy, and we can measure that happiness as cost avoidance within Red Hat Support.
We use a specific internal framework to calculate this success. By measuring instances where a user engages with Ask Red Hat and then chooses not to open a support case, we can quantify the business impact.
The Red Hat cost avoidance formula:
- Total cost avoidance = (count of potential cases successfully resolved through customer self-solves using Ask Red Hat) × (calculated cost of a support case)*
*While specific internal costs remain proprietary, this formula allows us to track the financial resilience provided by AI.
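As arithmetic, the formula is a simple product. The sketch below uses entirely hypothetical numbers (the real per-case cost is proprietary and the case count is invented for illustration):

```python
def total_cost_avoidance(self_solved_cases: int, cost_per_case: float) -> float:
    """Cost avoidance = cases self-solved via the assistant x cost of a support case."""
    return self_solved_cases * cost_per_case

# Hypothetical inputs for illustration only; not Red Hat's internal figures.
print(total_cost_avoidance(3000, 500.0))  # 1500000.0
```

The hard part in practice is not the multiplication but the count: reliably attributing "this user would have opened a case" to a given Ask Red Hat session.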
In 2025 alone, Ask Red Hat contributed approximately $1.5M in cost avoidance from avoided technical support cases. Each of those cases represents a time when a customer or partner was able to take a swing at solving a problem themselves, faster than even Red Hat’s quick support response times, because Ask Red Hat gave them what they needed. This shows that enterprise AI is not just an efficiency play; it is a foundational shift in how we deliver value across the customer experience: improving customer time-to-resolution while reducing the load on our support engineers.
Looking ahead: The roadmap to agentic AI
We are rapidly moving from an assistant that simply answers to an agent that acts. Our upcoming roadmap focuses on making the customer experience even easier:
- Single point of entry orchestration layer: We are moving toward a multi-agent experience where Ask Red Hat can coordinate between different specialized AI tools to solve complex, multi-step issues.
- Nearly real-time RAG: We have already optimized our pipeline so that documentation updates are reflected in Ask Red Hat in under four minutes, keeping the assistant consistently aligned with the latest source of truth.
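The orchestration pattern above can be sketched as a front door that classifies each request and routes it to a specialized handler. Everything here is illustrative: the handler names, the keyword-based classifier (a stand-in for an LLM intent classifier), and the routing table are assumptions, not Ask Red Hat's actual architecture.

```python
from typing import Callable

# Hypothetical specialized handlers behind the single point of entry.
def docs_agent(query: str) -> str:
    return f"[docs] retrieved passages for: {query}"

def case_agent(query: str) -> str:
    return f"[support] drafted case details for: {query}"

ROUTES: dict[str, Callable[[str], str]] = {
    "documentation": docs_agent,
    "support_case": case_agent,
}

def classify(query: str) -> str:
    # Stand-in for an LLM-based intent classifier.
    return "support_case" if "open a case" in query.lower() else "documentation"

def orchestrate(query: str) -> str:
    """Single entry point: classify the request, then delegate to a specialist."""
    return ROUTES[classify(query)](query)

print(orchestrate("How do I configure SELinux?"))
print(orchestrate("Please open a case about my cluster"))
```

The value of this shape is that new specialists (upgrade planners, log analyzers) can be added behind the same front door without changing the customer-facing experience.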
By grounding our AI strategy in open source, transparent ROI, and rigorous safety guardrails, Red Hat has successfully moved past the experimental phase of AI and into a future of documented customer success.
You can learn more about the Ask Red Hat technical details on our AI system card, as well as the Ask Red Hat solution architecture on the Red Hat Architecture Center.
About the author
Matt Ruzicka is a Portfolio Content Strategist for Red Hat’s Intelligent Experience Delivery team with over a decade of experience at Red Hat—spanning AI solutions, customer success, and Technical Account Management. They focus on bridging the gap between complex technology and meaningful customer outcomes.