
Explainable AI (XAI)

If AI is to meet basic business, legal and ethical needs, humans must be able to understand how it works. Unlike conventional AI systems, Causal AI is intrinsically explainable and human-compatible. causaLens leverages Causal AI for the enterprise, putting the “cause” in “because”.

What is Explainable AI?

Explainable AI (“XAI” for short) is AI humans can understand.

XAI sometimes encompasses important questions about algorithmic bias, fairness and discrimination. Find out how Causal AI addresses algorithmic bias here.


Why is Explainable AI important?

While AI can generate enormous benefits, black box models lead to project failures and place businesses at needless risk. 

Businesses need explainable AI to harness the technology’s true potential. Explainability enables a wide range of business stakeholders to audit, trust, de-bias, improve, gain insight from and partner with AI systems.

To illustrate why explainability matters, take as a simple example an AI system used by a bank to vet mortgage applications. There are a wide range of stakeholders that have an interest in this model being explainable.

  1. Sam applies for a mortgage.

  2. Sam’s application is fed into an AI system.

  3. The AI estimates Sam’s probability of defaulting.

  4. Sam’s mortgage application is rejected.

Two kinds of XAI

There are two distinct approaches to explainability, each with very different technical and commercial implications.


Post hoc (“after the fact”) approaches attempt to make black box models comprehensible, often by approximating them with a second model.

Two models (a black box and an approximation algorithm) erode trust

Explanations are based on an approximation model that is not 100% reliable, resulting in false confidence, fragility and “fairwashing”.
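This weakness can be sketched with a toy surrogate: fit a simple model to a black box’s outputs and measure how faithfully it reproduces them. The scoring function and feature names below are hypothetical, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "black box": a nonlinear scoring function we cannot inspect.
def black_box(income, debt):
    return 1 / (1 + np.exp(-(0.5 * income - 0.8 * debt + 0.3 * income * debt)))

# Sample some applicants and record the black box's scores.
income = rng.uniform(0, 1, 500)
debt = rng.uniform(0, 1, 500)
scores = black_box(income, debt)

# Post hoc surrogate: approximate the black box with a linear model.
X = np.column_stack([income, debt, np.ones_like(income)])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)

# Fidelity: how well does the surrogate reproduce the black box?
pred = X @ coef
r2 = 1 - np.sum((scores - pred) ** 2) / np.sum((scores - np.mean(scores)) ** 2)
print(f"surrogate R^2 vs black box: {r2:.3f}")  # below 1.0: the explanation is only approximate
```

Any explanation read off the surrogate’s coefficients inherits that approximation gap: wherever the surrogate diverges from the black box, the explanation is describing the wrong model.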

Explanations only after model building

You only find out whether your model is safe, fair and compliant, after you’ve invested resources in building it.

Static system only

Post hoc explanations only explain how models have already behaved in previously encountered circumstances. When the world radically changes (think COVID-19) and your model breaks, you only find out after the fact.

Not suited to non-technical stakeholders

Post hoc explanations are largely unusable by non-technical stakeholders, including end users and regulators.

Intrinsically explainable models are already transparent and don’t require further explanation.

A single model reinforces trust

Explanations are guaranteed to be faithful to your model, promoting trust and transparency.

Explanations before model building

Domain experts can select features to include or exclude before model building, helping to reduce algorithmic risk.

Dynamic systems

You can guarantee to your stakeholders, including regulators and risk teams, how the model will behave in all circumstances, including ones that haven’t happened yet.


Suited to non-technical stakeholders

Non-technical users can easily interpret and interface with models.

Causal AI: True XAI

There are clear advantages to intrinsically explainable AI. However, not all intrinsically explainable models are equally good. Causal AI has several critical advantages over vanilla interpretable machine learning models.


Intrinsically explainable machine learning models include linear models and decision trees.
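As a minimal sketch of intrinsic explainability, a linear model’s coefficients are themselves the explanation: each weight states exactly how much a feature moves the prediction. The feature names and weights below are illustrative, not a real credit model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mortgage features on a 0-1 scale.
income, debt_ratio = rng.uniform(0, 1, (2, 200))

# Simulated target: risk rises with debt ratio and falls with income.
default_risk = 0.6 * debt_ratio - 0.4 * income + rng.normal(0, 0.05, 200)

# Fit a linear model; its coefficients need no second "explainer" model.
X = np.column_stack([income, debt_ratio, np.ones(200)])
coef, *_ = np.linalg.lstsq(X, default_risk, rcond=None)

# The explanation is read directly off the fitted weights.
print(f"income weight: {coef[0]:.2f}, debt ratio weight: {coef[1]:.2f}")
```

A decision tree offers the same property in a different form: the explanation is the path of threshold tests from root to leaf.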

Explainability/accuracy tradeoff

Intrinsic explainability comes at the expense of model flexibility and performance in conventional machine learning.

Minimal human-machine interaction

Users are limited to feature selection.

Predictions only

Explanations are ultimately correlation-based. This means they can be used only for predictions, and can be seriously misleading if naively applied to business decision-making.
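A toy simulation, with illustrative variable names, shows how a correlation-based explanation can overstate the payoff of an action when a confounder is present:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Toy structural model: seasonality drives both marketing spend and sales,
# while spend also genuinely affects sales through traffic.
seasonality = rng.normal(size=n)
spend = 2.0 * seasonality + rng.normal(size=n)
traffic = 1.5 * spend + rng.normal(size=n)
sales = 0.8 * traffic + 1.0 * seasonality + rng.normal(size=n)

# Observational slope mixes the causal path with the seasonal confounder.
obs_slope = np.cov(spend, sales)[0, 1] / np.var(spend)

# Intervention do(spend): spend is set externally, cutting the arrow
# from seasonality into spend; only the causal path remains.
spend_do = rng.normal(size=n)
traffic_do = 1.5 * spend_do + rng.normal(size=n)
sales_do = 0.8 * traffic_do + 1.0 * seasonality + rng.normal(size=n)
causal_slope = np.cov(spend_do, sales_do)[0, 1] / np.var(spend_do)

print(f"observational slope: {obs_slope:.2f}")   # inflated by seasonality
print(f"interventional slope: {causal_slope:.2f}")  # ~ 1.5 * 0.8 = 1.2
```

A purely correlational explanation would report the inflated observational slope, leading a decision-maker to expect more return from extra spend than an actual intervention would deliver.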

Limited actionable information

Explanations don’t provide easily actionable information: if you are unhappy with a prediction, you need to fit an entirely new model. This may change the model in unexpected ways, requiring you to generate new explanations and iterate.

Causal AI builds intrinsically explainable models that improve on standard models across several critical dimensions.

Explainability/accuracy synergy

Causal models that describe the world more accurately also produce better explanations.

Symbiotic human-machine partnership

Domain experts can fully interact with the model, specifying causal relationships between variables and manipulating causal graphs with complete freedom. Users can interrogate and probe models to simulate business experiments, answer why-questions, and explore “what if” scenarios.
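As a sketch of this kind of interaction (plain Python, not the causaLens API), a causal graph can be represented as an edge dictionary that a domain expert edits and queries directly; the variable names are hypothetical:

```python
# Causal graph as a mapping from each variable to its direct effects.
graph = {
    "interest_rate": ["demand"],
    "marketing": ["demand"],
    "demand": ["sales"],
}

def parents(graph, node):
    """Return the direct causes of a node, i.e. sources of edges into it."""
    return sorted(src for src, children in graph.items() if node in children)

# A domain expert asserts a direct effect of marketing on sales
# by adding an edge, no retraining pipeline required to edit the structure.
graph["marketing"].append("sales")

print(parents(graph, "sales"))  # ['demand', 'marketing']
```

Because the graph is an explicit object rather than implicit model weights, an expert’s edit is immediately visible and auditable by every other stakeholder.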

Beyond predictions

Causal graphs are also used for predictions. But beyond this, causal explanations describe the expected consequences of taking various business actions and diagnose why certain events happened.

Actionable insights

Models can be easily manipulated on the basis of explanations, without re-running the data science pipeline. Models gracefully evolve, with new models retaining everything that was good about old versions.

Causal explainability complements model performance

There’s a tradeoff between prediction accuracy and explainability in conventional machine learning that doesn’t apply to Causal AI. Our technology produces models that are more accurate and that everyone can understand. Mobilize explainable AI in your organization.

Demo Explainable AI

Explainable AI doesn't explain enough

Learn more about how and why Causal AI generates a superior kind of explainability.

Read the blog