Explainable AI

If AI is to meet basic business, legal and ethical needs, it must be explainable. Many stakeholders, from regulators to data scientists to end users, need to know how AI models work. Unlike conventional AI systems, Causal AI is intrinsically explainable and human-compatible. causaLens leverages Causal AI for the enterprise, putting the “cause” in “because”.

What is Explainable AI?

Explainable AI (“XAI” for short) is AI humans can understand.

XAI sometimes encompasses important questions about algorithmic bias, fairness and discrimination. Find out how Causal AI addresses algorithmic bias here. 

Many advanced machine learning algorithms are “black boxes” that lack explainability.


Why is Explainable AI important?

While AI can generate enormous benefits, black box models lead to project failures and place businesses at needless risk. 

Businesses need explainable AI to harness the technology’s true potential. Explainability enables a wide range of business stakeholders to audit, trust, de-bias, improve, gain insight from and partner with AI systems.

To illustrate why explainability matters, take as a simple example an AI system used by a bank to vet mortgage applications. A wide range of stakeholders have an interest in this model being explainable.

STAKEHOLDERS

1. Sam applies for a mortgage.
2. Sam’s application is input into an AI model.
3. The AI estimates Sam’s probability of defaulting.
4. Sam’s mortgage application is rejected.

Two kinds of explainability

There are two distinct approaches to explainability, each with very different technical and commercial implications.

POST HOC EXPLAINABILITY

Post hoc (“after the fact”) approaches attempt to make black box models comprehensible, often by approximating them with a second, simpler model (a sketch follows at the end of this section).

Two models (a black box plus an “approximation algorithm”) erode trust

Explanations are based on an approximation model that is not 100% reliable, which can result in false confidence, fragility and “fairwashing”.

Explanations only after model building

You only find out whether your model is safe, fair and compliant after you’ve invested resources in building it.

Static system only

Post hoc explanations only explain how models have already behaved in previously encountered circumstances. When the world radically changes (think COVID-19) and your model breaks, you only find out after the fact.

Not suited to non-technical stakeholders

Post hoc explanations are known to be unusable by most stakeholders, including end users and regulators.
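
To make the post hoc approach concrete, here is a minimal sketch (not causaLens code) that trains a black-box classifier, then fits a shallow decision tree as a global surrogate to the black box’s predictions. The data, models and feature names are hypothetical; the fidelity score shows exactly where the two-model problem bites, since anything below 100% means the “explanation” describes a different model from the one actually making decisions.

# Minimal sketch of post hoc explainability via a global surrogate model.
# Assumes scikit-learn is installed; data and models are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Synthetic "mortgage application" data: features X, default labels y.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# 1) The black box: accurate but opaque.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# 2) The post hoc surrogate: a shallow tree fit to the black box's
#    *predictions*, not to the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3) Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to black box: {fidelity:.1%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))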
INTRINSIC EXPLAINABILITY

Intrinsically explainable models are transparent by construction and don’t require a second explanation model (a simple sketch follows at the end of this section).

One model reinforces trust

Explanations are guaranteed to be faithful to your model, promoting trust and transparency.

Explanations before model building

Domain experts can select features to include or exclude before model building, helping to reduce algorithmic risk.

Dynamic systems

You can guarantee to your stakeholders, including regulators and risk teams, how the model will behave in all circumstances, including ones that haven’t happened yet.

Human-friendly

Non-technical users can easily interpret and interface with models.
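
For contrast, here is a minimal sketch of an intrinsically explainable model: a logistic regression whose fitted coefficients are themselves the explanation, with no second model required. The lending-style feature names and data are hypothetical.

# Minimal sketch of intrinsic explainability: the fitted model *is* the
# explanation. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "missed_payments"]
X = rng.normal(size=(1000, 3))
# Illustrative ground truth: defaults driven by debt and missed payments.
y = (0.8 * X[:, 1] + 1.2 * X[:, 2] + rng.normal(size=1000) > 1).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient can be read directly as the model's reasoning:
# a positive weight raises the predicted probability of default.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")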

Causal AI: True XAI

There are clear advantages to intrinsically explainable AI. However, not all intrinsically explainable models are equally good. Causal AI has several critical advantages over vanilla interpretable machine learning models.

CONVENTIONAL INTRINSIC EXPLAINABILITY

Intrinsically explainable machine learning models include linear models and decision trees.

Explainability/accuracy tradeoff

Intrinsic explainability comes at the expense of model flexibility and performance in conventional machine learning.

Minimal human-machine interaction

Users are limited to feature selection.

Predictions only

Explanations are ultimately correlation-based. This means they can be used only for prediction, and they can be seriously misleading if naively applied to business decision-making (the sketch after this section illustrates why).

Limited actionable information

Explanations don’t provide easily actionable information: if you are unhappy with a prediction, you need to fit an entirely new model. This may change the model in unexpected ways, requiring you to generate new explanations and iterate again.
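
To see how correlation-based explanations can mislead decision-makers, consider this hypothetical confounding sketch: a regression assigns a large weight to a feature that has no causal effect on the outcome, because both share a hidden common cause. Acting on that weight would achieve nothing.

# Hypothetical confounding sketch in the lending setting: "luxury_spend"
# has no causal effect on default, but both are driven by income.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
income = rng.normal(size=5000)                   # hidden common cause
luxury_spend = income + rng.normal(size=5000)    # caused by income
default_risk = -income + rng.normal(size=5000)   # also caused by income only

# A correlational model assigns luxury_spend a large negative weight...
model = LinearRegression().fit(luxury_spend.reshape(-1, 1), default_risk)
print(f"Apparent effect of luxury_spend on default: {model.coef_[0]:+.2f}")

# ...yet intervening on luxury_spend (varying it independently of income)
# leaves default risk untouched: the true causal effect is zero.
spend_intervened = rng.normal(size=5000)
print(f"Correlation under intervention: "
      f"{np.corrcoef(spend_intervened, default_risk)[0, 1]:+.2f}")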
CAUSAL AI

Causal AI builds intrinsically explainable models that improve on standard models across several critical dimensions.

Explainability/accuracy synergy

More realistic causal models, which describe the world more accurately, also produce better explanations.

Symbiotic human-machine partnership

Domain experts can fully interact with the model, specifying causal relationships between variables and manipulating causal graphs with complete freedom. Users can interrogate and probe models to simulate business experiments, address why-questions, and explore “what if” scenarios (a minimal sketch follows at the end of this section).

Beyond predictions

Causal graphs can also be used for predictions. But beyond this, causal explanations contain information about the expected consequences of taking various business actions, as well as about why certain events happened.

Actionable insights

Models can be easily manipulated on the basis of explanations, without re-running the data science pipeline. Models gracefully evolve, with new models retaining everything that was good about old versions.
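
As a toy illustration of this kind of interrogation, the sketch below hand-rolls a tiny structural causal model and answers a “what if” question by intervening on one variable. The variables and equations are invented for illustration; they are not the causaLens engine.

# Minimal hand-rolled structural causal model (SCM) for "what if" queries.
# Variables and equations are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(2)
N = 10_000

def simulate(do_rate=None):
    """Simulate the SCM; optionally intervene by fixing the interest rate."""
    income = rng.normal(1.0, 0.3, N)
    # The offered rate depends on income unless we intervene: do(rate).
    rate = do_rate if do_rate is not None else 0.08 - 0.02 * income
    # Probability of default depends causally on income and rate.
    p_default = 1 / (1 + np.exp(-(3.0 * rate - 1.5 * income)))
    return p_default.mean()

print(f"Observed default rate:            {simulate():.1%}")
# "What if we offered everyone a 5% rate?" -- an intervention, not a filter.
print(f"Default rate under do(rate=0.05): {simulate(do_rate=0.05):.1%}")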

Causal explainability complements model performance

There’s a tradeoff between prediction accuracy and explainability in conventional machine learning that doesn’t apply to Causal AI.

Effortless next-generation explainable AI

The causaLens dashboard surfaces the causal diagrams behind its forecasts. Business stakeholders can intuitively understand these diagrams, easily share domain knowledge with the AI, and partner with it to explore hypothetical scenarios and design business interventions. Here we show a lending model with a tailored module for evaluating model fairness. We have dashboards tailored to many areas of business, banking and finance.