What is explainable AI?
Explainable AI (XAI) is a set of techniques applied during the machine learning (ML) lifecycle, with the goal of making AI outputs more understandable and transparent to humans. Ideally, XAI answers questions like:
- Why did the model do that?
- Why not something else?
- When was the model successful?
- When did the model fail?
- When can I trust the model’s output?
- How can I correct an error?
An explainable AI system should be able to demonstrate its competencies; explain its past actions, ongoing processes, and upcoming steps; and cite any relevant information on which its actions are based. In short, explainable AI encourages AI systems to “show their work.”
What is the purpose of explainable AI?
Businesses are increasingly relying on AI systems to make decisions. For example, in healthcare, AI may be used for image analysis or medical diagnosis. In financial services, AI may be used to approve loans and automate investments.
These decisions affect people, environments, and systems, and can pose risks to them. Transparency and accountability are essential to building trust between humans and AI systems, while a lack of understanding can lead to confusion, errors, and sometimes legal consequences.
By prioritizing transparency and explainability, you can build AI that’s not only technically advanced, but also safe, fair, and aligned with human values and needs.
Interpretability vs. explainability
In the context of XAI, explainability and interpretability are often used interchangeably, which can lead to confusion.
Interpretability refers to the degree to which a human can understand a model’s internal logic. It is a property of the model itself and exists on a spectrum. A model with high interpretability has features that are inherently understandable, meaning a non-expert can grasp the relationship between inputs and outputs. A model with low interpretability has inner workings too complex for a human to follow directly.
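As a concrete illustration, here is a minimal sketch of a highly interpretable model: a linear regression built with scikit-learn, trained on hypothetical housing data (the features and prices below are invented for illustration). Its fitted coefficients can be read directly as “how much each input moves the output.”

```python
# A minimal sketch of an inherently interpretable model.
# The data is hypothetical: square footage and bedroom count -> price.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1000, 2], [1500, 3], [2000, 3], [2500, 4]])
y = np.array([200_000, 290_000, 360_000, 450_000])

model = LinearRegression().fit(X, y)

# Each coefficient is directly readable: the predicted price changes by
# coef_[0] dollars per extra square foot and coef_[1] per extra bedroom,
# so even a non-expert can see how inputs relate to outputs.
print(f"price ≈ {model.intercept_:.0f} "
      f"+ {model.coef_[0]:.0f} * sqft "
      f"+ {model.coef_[1]:.0f} * bedrooms")
```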
Explainability describes the process of generating a justification or explanation. Explainability is achieved through a set of techniques (XAI techniques) applied to a complex model in order to reveal how and why it made a specific decision. When a model’s logic is too complex to interpret firsthand, XAI techniques can help you better understand why the model behaved the way it did.
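For contrast, here is a minimal sketch of one common post-hoc XAI technique: SHAP feature attribution applied to a low-interpretability model. The random forest and the synthetic data are assumptions made for illustration; SHAP is one of several libraries that can attribute a specific prediction to each input feature.

```python
# A minimal sketch of a post-hoc XAI technique using the SHAP library.
# The model and data are hypothetical, chosen only to illustrate the idea.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # three synthetic input features
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)

# A "low interpretability" model: hundreds of trees, no readable logic.
model = RandomForestRegressor(n_estimators=100).fit(X, y)

# Explain one specific prediction: which features pushed it up or down,
# and by how much, relative to the model's average output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print("Feature contributions for the first sample:", shap_values[0])
```

Each value in the output is that feature’s contribution to this particular prediction, which is exactly the kind of “why did the model do that?” answer the section above describes.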
When high interpretability provides sufficient transparency, external explainability is generally not needed. Low interpretability—a lack of inherent transparency—creates a need for external explainability to establish trust and understanding in a model.