What is explainable AI?


Explainable AI (XAI) is a set of techniques applied during the machine learning (ML) lifecycle, with the goal of making AI outputs more understandable and transparent to humans. Ideally, XAI answers questions like:

  • Why did the model do that?
  • Why not something else?
  • When was the model successful?
  • When did the model fail?
  • When can I trust the model’s output?
  • How can I correct an error?

Explainable AI should be able to demonstrate its competencies and understanding; explain its past actions, ongoing processes, and upcoming steps; and cite any relevant information on which its actions are based. In short, explainable AI encourages AI systems to “show their work.”


Businesses are increasingly relying on AI systems to make decisions. For example, in healthcare, AI may be used for image analysis or medical diagnosis. In financial services, AI may be used to approve loans and automate investments.

These decisions affect and can pose risks to people, environments, and systems. Transparency and accountability are essential to creating trust between humans and AI systems. Meanwhile, a lack of understanding can lead to confusion, errors, and sometimes legal consequences.

By prioritizing transparency and explainability, you can build AI that’s not only technically advanced, but also safe, fair, and aligned with human values and needs.

Interpretability vs. explainability

In the context of XAI, explainability and interpretability are often used interchangeably, which can lead to confusion. 

Interpretability refers to the degree to which a human can understand a model’s internal logic. Interpretability pertains to the state of a model and exists on a spectrum. A model with high interpretability has features that are inherently understandable—meaning a non-expert can comprehend the relationship between inputs and outputs. A model with low interpretability has inner workings that are too complex for a human to understand.
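To make the idea of inherent interpretability concrete, consider a linear scoring model: its behavior can be read directly from its weights, so no external explanation technique is needed. This is a minimal illustrative sketch; the feature names and weight values are invented for the example, not drawn from any real system.

```python
# Sketch of a highly interpretable model: a linear scorer whose weights
# directly state how each input moves the output.
# Feature names and weights are illustrative, not from a real system.

WEIGHTS = {"income": 0.5, "debt": -1.2, "years_employed": 0.3}

def score(applicant):
    """Weighted sum of features; each weight is its own explanation."""
    return sum(WEIGHTS[name] * value for name, value in applicant.items())

applicant = {"income": 50.0, "debt": 10.0, "years_employed": 4.0}
print(score(applicant))
# Reading the weights: each unit of debt lowers the score by 1.2, so even
# a non-expert can see why high-debt applicants score lower.
```

Because the relationship between inputs and outputs is visible in the weights themselves, a model like this sits at the high end of the interpretability spectrum.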

Explainability describes the process of generating a justification or explanation. Explainability is achieved through a set of techniques (XAI techniques) applied to a complex model in order to reveal how and why it made a specific decision. When a model’s logic is too complex to interpret firsthand, XAI techniques can help you better understand why the model behaved the way it did. 

When high interpretability provides sufficient transparency, external explainability is generally not needed. Low interpretability—a lack of inherent transparency—creates a need for external explainability to establish trust and understanding in a model.
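When a model’s internals cannot be read directly, one family of XAI techniques probes it from the outside: perturb each input, observe how the output shifts, and use the differences as an estimate of each input’s influence. The sketch below illustrates this perturbation-based idea with an invented, opaque loan model standing in for a real black box; it is a simplified illustration, not a production XAI method.

```python
# Sketch of perturbation-based explanation: treat the model as a black
# box, vary one input at a time, and measure the change in the output.
# The loan model, feature names, and coefficients are all invented for
# this example.
import math

def loan_model(income, debt, years_employed):
    """Hypothetical opaque model (stand-in for, e.g., a neural network)."""
    z = 0.05 * income - 0.1 * debt + 0.2 * years_employed - 2
    return 1 / (1 + math.exp(-z))  # approval probability

def explain(model, inputs, delta=1.0):
    """Estimate each input's influence by nudging it by `delta`
    and recording how much the model's output moves."""
    baseline = model(**inputs)
    influence = {}
    for name, value in inputs.items():
        perturbed = dict(inputs, **{name: value + delta})
        influence[name] = model(**perturbed) - baseline
    return influence

applicant = {"income": 50.0, "debt": 10.0, "years_employed": 4.0}
print(explain(loan_model, applicant))
# Debt's influence is negative, income's is positive: the explanation
# tells us which inputs pushed this decision in which direction.
```

Model-agnostic XAI methods such as LIME and SHAP build on this same idea of probing a black box with perturbed inputs, with more principled ways of weighting and aggregating the results.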
