What is explainable AI?


Explainable AI (XAI) is a set of techniques applied during the machine learning (ML) lifecycle, with the goal of making AI outputs more understandable and transparent to humans. Ideally, XAI answers questions like:

  • Why did the model do that?
  • Why not something else?
  • When was the model successful?
  • When did the model fail?
  • When can I trust the model’s output?
  • How can I correct an error?

Explainable AI should be able to demonstrate its competencies and understandings; explain its past actions, ongoing processes and upcoming steps; and cite any relevant information on which its actions are based. In short, explainable AI encourages AI systems to “show their work.”


Businesses are increasingly relying on AI systems to make decisions. For example, in healthcare, AI may be used for image analysis or medical diagnosis. In financial services, AI may be used to approve loans and automate investments.

These decisions affect people, environments, and systems, and can pose risks to them. Transparency and accountability are essential to creating trust between humans and AI systems, while a lack of understanding can lead to confusion, errors, and sometimes legal consequences.

By prioritizing transparency and explainability, you can build AI that’s not only technically advanced, but also safe, fair, and aligned with human values and needs.

Interpretability vs. explainability

In the context of XAI, explainability and interpretability are often used interchangeably, which can lead to confusion. 

Interpretability refers to the degree to which a human can understand a model’s internal logic. Interpretability pertains to the state of a model and exists on a spectrum. A model with high interpretability has features that are inherently understandable—meaning a non-expert can comprehend the relationship between inputs and outputs. A model with low interpretability has inner workings that are too complex for a human to understand.

Explainability describes the process of generating a justification or explanation. Explainability is achieved through a set of techniques (XAI techniques) applied to a complex model in order to reveal how and why it made a specific decision. When a model’s logic is too complex to interpret firsthand, XAI techniques can help you better understand why the model behaved the way it did. 

When high interpretability provides sufficient transparency, external explainability is generally not needed. Low interpretability—a lack of inherent transparency—creates a need for external explainability to establish trust and understanding in a model.
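The contrast above can be sketched in code. A linear model is interpretable firsthand because its coefficients are the explanation; a black-box model needs an external XAI technique, such as perturbation-based attribution, to reveal which inputs drove a decision. This is a minimal illustrative sketch, not a real XAI library: the loan-scoring model, its weights, and the attribution method are toy assumptions chosen for clarity.

```python
# Interpretable model: a linear scorer whose coefficients ARE the explanation.
def linear_model(features):
    weights = {"income": 0.6, "debt": -0.3, "age": 0.1}  # toy weights (assumed)
    return sum(weights[name] * value for name, value in features.items())

# Black-box stand-in: pretend we cannot inspect its internals.
def black_box(features):
    return linear_model(features)

# External XAI technique (perturbation-based attribution):
# zero out each feature in turn and measure how much the output changes.
def perturbation_attribution(model, features):
    baseline = model(features)
    attributions = {}
    for name in features:
        perturbed = dict(features, **{name: 0.0})
        attributions[name] = baseline - model(perturbed)
    return attributions

applicant = {"income": 100.0, "debt": 50.0, "age": 30.0}
print(perturbation_attribution(black_box, applicant))
# → {'income': 60.0, 'debt': -15.0, 'age': 3.0}
```

Here the attribution recovers the linear model's own coefficient-times-value contributions, which is exactly the point: for a highly interpretable model the external explanation is redundant, while for a genuinely opaque model this kind of perturbation analysis may be the only window into its behavior.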
