In a previous article, The strategic choice: Making sense of LLM customization, we explored AI prompting as the first step in adapting large language models (LLMs) to real-world use. Prompting changes how an AI model responds in terms of tone, structure, and conversational behavior without changing what the model knows.

That strategy is effective until the model requires specific information it did not encounter during its initial training.

At that point, the limitation is no longer conversational—it is architectural.

Retrieval-augmented generation (RAG) helps address that limitation. Not by making models smarter, but by changing the systems they operate within.

From conversation to context

Prompting shapes behavior. It influences how a model reasons and responds, but it does not expand the model’s knowledge. LLMs are trained on broad, mostly public datasets that are frozen in time and detached from any single organization’s internal reality.

As long as an application can tolerate that gap, prompting may be enough.

Once an application depends on current documentation, internal policies, proprietary data, or rapidly evolving domain knowledge, however, prompting alone begins to fail. Prompts grow longer. Instructions become defensive. The model is asked to “be careful” about facts it cannot verify.

RAG addresses this by treating context as a first-class architectural concern.

Instead of relying exclusively on the parametric knowledge encoded in model weights, a RAG system retrieves relevant external information at query time and injects it into the model’s input. The model then reasons over this retrieved context to generate an answer, grounding its response in your own data rather than only in its training.

  • Prompting shapes how a model responds
  • RAG determines what additional information the model will bring into the process
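That division of labor can be sketched in a few lines of Python. Everything here (the corpus, the keyword-overlap scorer, and the prompt template) is an illustrative stand-in for a real embedding-based retriever and prompt-assembly layer:

```python
# A minimal sketch of the query-time RAG flow: retrieve relevant text,
# then inject it into the prompt the model actually sees.
# The corpus and scoring below are toy stand-ins, not a real system.

CORPUS = {
    "vacation-policy.md": "Employees accrue 1.5 vacation days per month.",
    "expense-policy.md": "Expenses over 500 dollars require director approval.",
    "onboarding.md": "New hires complete security training in week one.",
}

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    return sorted(
        CORPUS.items(),
        key=lambda item: -len(terms & set(item[1].lower().split())),
    )[:k]

def build_prompt(question: str) -> str:
    """Assemble the retrieved passages ahead of the user's question."""
    context = "\n".join(f"[{doc}] {text}" for doc, text in retrieve(question))
    return (
        "Answer using only the context below.\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt("How many vacation days do employees accrue each month")
```

The prompt text itself is what a prompting-only approach controls; the retrieval step is what RAG adds.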

RAG as a system, not a feature

At a technical level, RAG introduces a second system alongside the model itself: a retrieval layer that manages knowledge independently of generation.

This layer typically performs work the model is poorly suited for:

  • Processing and structuring large document collections
  • Maintaining up-to-date knowledge as source data changes
  • Selecting relevant information under strict token constraints
  • Enforcing access control and data boundaries

Crucially, this work happens outside the model.

In practice, RAG systems naturally divide into two phases:

  • Index-time, where documents are ingested, segmented, embedded, and stored in a retrieval-optimized form
  • Query-time, where a user question triggers the retrieval, selection, and assembly of context for generation
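The two phases can be sketched end to end. The fixed-size chunker, bag-of-words "embedding," and cosine scorer below are toy stand-ins for a real document splitter, embedding model, and vector store:

```python
# A sketch of the index-time / query-time split. Index-time work happens
# once, ahead of any question; query-time work happens per request.
import math
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    """Index-time: segment a document into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy 'embedding': term frequencies standing in for a dense vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Index-time: ingest, segment, embed, and store.
documents = [
    "The refund window is 30 days from the date of purchase.",
    "Support tickets are answered within one business day.",
]
index = [(c, embed(c)) for doc in documents for c in chunk(doc)]

# Query-time: embed the question and retrieve the nearest chunk.
def search(question: str) -> str:
    q = embed(question)
    return max(index, key=lambda entry: cosine(q, entry[1]))[0]
```

A production system replaces each of these pieces, but the phase boundary stays the same.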

The point where these phases meet is the model’s context window. This window is finite, expensive, and shared between system instructions, user input, and retrieved content. RAG exists so the right information occupies that space at the last possible moment.
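One way to picture the contention for that space: system instructions and the user question are non-negotiable, so retrieved chunks fill whatever budget remains, in relevance order. The whitespace token counter here is a rough stand-in for the model's real tokenizer:

```python
# A sketch of context-window budgeting: retrieved chunks are admitted
# greedily, most relevant first, until the token budget runs out.

def count_tokens(text: str) -> int:
    """Rough approximation; real systems use the model's own tokenizer."""
    return len(text.split())

def fit_context(system: str, question: str, ranked_chunks: list[str],
                budget: int) -> list[str]:
    """Keep the most relevant chunks that still fit the remaining budget."""
    used = count_tokens(system) + count_tokens(question)
    kept = []
    for chunk in ranked_chunks:  # assumed sorted most relevant first
        cost = count_tokens(chunk)
        if used + cost > budget:
            break
        kept.append(chunk)
        used += cost
    return kept

kept = fit_context(
    system="Answer strictly from the provided context.",
    question="What is the refund window?",
    ranked_chunks=["refund window is 30 days", "shipping takes five days",
                   "support answers within one day"],
    budget=20,
)
```

Even this toy version shows the tradeoff: a larger system prompt or longer question directly reduces how much retrieved knowledge the model can see.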

Architecturally, this creates a clear separation of concerns:

  • Retrieval systems manage freshness, scale, and provenance
  • Models focus on synthesis and language generation

This separation is what makes RAG powerful and what introduces new complexity.

Why retrieval is harder than it looks

RAG is often described as “grounding” model outputs, but retrieval itself is approximate. Most retrieval systems rely on similarity, not truth. They return content that is close in representation space, not content guaranteed to be correct for a given question.

This creates what many teams encounter as the retrieval gap.

Even when the correct information exists in the knowledge base, the system may retrieve something adjacent but incomplete, outdated, or subtly wrong. When that happens, the model can produce a confident, articulate answer that is now grounded in the wrong source.
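A toy example makes the gap concrete. Both documents below are invented; the point is that surface similarity favors the outdated one simply because it shares more vocabulary with the question:

```python
# A toy illustration of the retrieval gap: similarity ranks the stale
# 2023 policy above the 2024 policy that actually supersedes it.

docs = {
    "remote-policy-2023.md": ("Remote work policy: employees may work "
                              "remotely up to two days per week with "
                              "manager approval."),
    "remote-policy-2024.md": ("Updated rule: remote arrangements now allow "
                              "up to four days away from the office each "
                              "week."),
}

def overlap(query: str, text: str) -> int:
    """Naive similarity: shared lowercase words between query and document."""
    return len(set(query.lower().split()) & set(text.lower().split()))

query = "How many days per week may employees work remotely"
best = max(docs, key=lambda name: overlap(query, docs[name]))
# Similarity picks the 2023 policy, and the model would answer confidently
# from it, producing a well-grounded but outdated response.
```

Real embedding models are far better than word overlap, but the failure shape is the same: close is not the same as correct.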

No amount of prompt engineering can fix this failure mode. If the context window is wrong, even a well-aligned model will reason incorrectly.

This is why RAG systems evolve beyond naive implementations. Techniques such as query transformation, hybrid retrieval, re-ranking, and context filtering exist not to improve generation, but to reduce the probability of retrieving the wrong material in the first place.
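Re-ranking is a representative example. A cheap first stage over-fetches candidates, then a more precise scorer reorders them before anything reaches the context window. The two scorers below are toy stand-ins for, say, BM25 or vector search in the first stage and a cross-encoder in the second:

```python
# A sketch of two-stage retrieval: recall-oriented first stage,
# precision-oriented re-ranker.

def first_stage(query: str, corpus: list[str], n: int = 10) -> list[str]:
    """Cheap pass: over-fetch candidates by keyword overlap."""
    terms = set(query.lower().split())
    return sorted(corpus,
                  key=lambda doc: -len(terms & set(doc.lower().split())))[:n]

def rerank_score(query: str, doc: str) -> int:
    """Toy re-ranker: reward preserved word order via shared bigrams."""
    def bigrams(text: str) -> set:
        words = text.lower().split()
        return set(zip(words, words[1:]))
    return len(bigrams(query) & bigrams(doc))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    candidates = first_stage(query, corpus)
    return sorted(candidates, key=lambda doc: -rerank_score(query, doc))[:k]

corpus = [
    "Process notes: approval of remote desks and work orders and process reviews",
    "The remote work approval process requires a manager signature",
]
top = retrieve("remote work approval process", corpus)
# Both documents tie on raw keyword overlap; the re-ranker promotes the
# one that actually answers the query.
```

The second stage exists purely to lower the odds of the wrong material winning the context window, which is exactly the failure mode described above.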

Seen this way, “advanced RAG” is less about sophistication and more about compensation for the probabilistic nature of retrieval itself.

Why enterprises adopt RAG anyway

Despite its complexity, RAG solves problems that prompting alone cannot.

RAG is particularly valuable when applications require:

  • Access to proprietary or internal knowledge
  • Data that changes frequently
  • Clear separation between model behavior and business data
  • Explicit traceability and source attribution

That last point is often decisive. In many enterprise environments, the ability to cite a specific document, policy, or page is what makes a system usable at all. RAG makes it possible to answer not only what the system said, but where the information came from.
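In code, attribution means carrying provenance metadata with every retrieved chunk and returning it alongside the answer. The `Chunk` structure, file names, and `generate()` stub below are illustrative, not a specific product API:

```python
# A sketch of source attribution: each retrieved passage keeps its
# provenance, and the response exposes citations next to the answer.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str  # e.g. a document path or policy ID
    page: int

def generate(prompt: str) -> str:
    """Stub for the model call; a real system would invoke an LLM here."""
    return "(model answer grounded in the numbered passages)"

def answer_with_citations(question: str, chunks: list[Chunk]) -> dict:
    """Return the answer together with the provenance of every passage used."""
    context = "\n".join(f"[{i}] {c.text}" for i, c in enumerate(chunks, start=1))
    answer = generate(f"Context:\n{context}\n\nQuestion: {question}")
    return {
        "answer": answer,
        "citations": [
            {"ref": i, "source": c.source, "page": c.page}
            for i, c in enumerate(chunks, start=1)
        ],
    }

result = answer_with_citations(
    "How are expenses approved?",
    [Chunk("Expenses over 500 dollars require director approval.",
           "finance/expense-policy.pdf", 3)],
)
```

Because the citations come from the retrieval layer rather than the model, they can be audited independently of anything the model generates.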

In this sense, RAG is less about intelligence and more about control. The model is no longer expected to know everything. It is simply expected to use the information the system provides.

Where RAG stops

RAG changes what information a model can reference. It does not change the model’s weights or how it reasons.

A model augmented with retrieval will still follow its original probabilistic patterns, exhibit the same biases, and default to its pre-trained tone and style. If a model consistently misinterprets retrieved information or applies domain concepts incorrectly, retrieval alone will not fix that.

This boundary matters.

RAG addresses knowledge access, but it does not guarantee behavioral consistency or generative style. If an application requires a model to be more cautious or adhere strictly to domain-specific logic, retrieval must be paired with techniques that modify the model's underlying weights.

Recognizing this boundary prevents over-engineering retrieval pipelines to solve problems that are fundamentally about model behavior.

Looking ahead

So far, we’ve treated RAG as an architectural idea—why it exists, what it enables, and where it breaks down.

In our next article, we’ll look at what actually happens when teams try to run retrieval in production. We’ll explore how naive RAG evolves into multistage pipelines, why agent-driven retrieval emerges, and how real systems manage the tradeoffs between accuracy, latency, and cost.

After that, we’ll turn to the final layer of customization: techniques that reshape the model itself.

In practice, building effective AI systems isn’t about choosing one approach. It’s about understanding where each layer fits and when to move on.

Ready to put these concepts into practice? Dive into the RAG AI quickstart in the catalog or read the article to get your first pipeline running. And be sure to register for Red Hat Summit 2026 to connect with our team and explore the future of production AI.

Resources

The adaptive enterprise: Why AI readiness is disruption readiness

In this e-book, Michael Ferris, COO and CSO at Red Hat, examines the pace of AI-driven change and the technological disruption facing today's IT leaders.

About the authors

Frank La Vigne is a seasoned Data Scientist and the Principal Technical Marketing Manager for AI at Red Hat. He possesses an unwavering passion for harnessing the power of data to address pivotal challenges faced by individuals and organizations.
A trusted voice in the tech community, Frank co-hosts the renowned “Data Driven” podcast, a platform dedicated to exploring the dynamic domains of Data Science and Artificial Intelligence. Beyond his podcasting endeavors, he shares his insights and expertise through FranksWorld.com, a blog that serves as a testament to his dedication to the tech community. Always ahead of the curve, Frank engages with audiences through regular livestreams on LinkedIn, covering cutting-edge technological topics from quantum computing to the burgeoning metaverse.

As a principal technologist for AI at Red Hat with over 30 years of experience, Robbie works to support enterprise AI adoption through open source innovation. His focus is on cloud-native technologies, Kubernetes, and AI platforms, helping to deliver scalable and secure solutions using Red Hat AI.

Robbie is deeply committed to open source, open source AI, and open data, believing in the power of transparency, collaboration, and inclusivity to advance technology in meaningful ways. His work involves exploring private generative AI, traditional machine learning, and enhancing platform capabilities to support open and hybrid cloud solutions for AI. His focus is on helping organizations adopt ethical and sustainable AI technologies that make a real impact.

