Explaining the News: Why Capital Markets Need Causal AI

Nearly all players in capital markets believe that AI is critical to their future success. However, AI adoption also poses serious, even existential, risks for enterprises that misapply the technology. To capitalize on the opportunities of AI while mitigating its risks, participants in capital markets need explainable AI (“XAI”). XAI is AI that makes predictions, recommendations, and decisions that humans can audit, understand, explain, and, crucially, trust.

A leading North American Bank worked with causaLens to apply Causal AI to build explainable models that mitigate and manage trading risk. The project focused on generating alpha from news sentiment data — Causal AI enabled significantly greater explainability while also improving PnL outcomes for trading strategies.

Why XAI in Capital Markets?

We highlight three key reasons why it’s essential that AI systems deployed in capital markets are explainable. 

1. Explain to understand the why behind decisions.

Machine learning (ML) models trained on huge alternative and financial datasets are often riddled with spurious correlations. A spurious correlation is a statistical relationship in which two or more variables are associated but not causally related, whether by coincidence or because of an unseen third factor. Models that rely on such correlations make predictions for the wrong reasons. Explanations allow humans to verify the assumptions a model is making and to check that it will behave reliably in new contexts.

In capital markets it’s critical to understand whether your trading “signals” are genuine or spurious. As a simple example, the “Super Bowl Indicator” is a patently spurious correlation which holds that a win by an NFC team presages a bull market, whereas an AFC win augurs badly for stocks. We would caution against treating the 2022 win by the Rams (an NFC team) as a buy signal. Explanations help investors identify and eliminate the misleading correlations that undermine ML performance.
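
As a toy illustration of how easily such spurious relationships arise, the following sketch (synthetic data, assuming only numpy) generates two independent random walks that by construction have no causal link, yet frequently show a substantial correlation in their levels:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two independent random walks: by construction, neither causes the other.
x = np.cumsum(rng.normal(size=500))
y = np.cumsum(rng.normal(size=500))

# Trending series routinely show substantial spurious correlation in levels...
print(f"correlation of levels:  {np.corrcoef(x, y)[0, 1]:+.2f}")

# ...which largely vanishes once the shared drift is removed by differencing.
print(f"correlation of returns: {np.corrcoef(np.diff(x), np.diff(y))[0, 1]:+.2f}")
```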

2. Explain to comply with existing and incoming regulations. 

The AI regulatory landscape for financial services is rapidly evolving. Waves of incoming regulations and proposals are demanding greater explainability. 

April 2021 was a watershed for AI regulation. The European Commission published a far-reaching proposal mandating that high-risk AI systems be transparent, robust, and subject to human oversight, with fines of up to 4% of annual revenue for non-compliance. Many financial services AI applications count as “high-risk” under the proposal and face extensive compliance obligations. causaLens has provided expert commentary on these regulations — learn more here.

The European Parliament followed up on the Commission’s proposal with recommendations for AI applications in the financial sector. It singles out lack of explainability as a critical “source of lack of trust” for hedge funds and banks, and emphasizes the problem of machine learning models “overfitting” and failing “when novel market conditions differ significantly from those observed in the past.”

The EU’s leading-edge proposals are reflected in emerging regulations for financial institutions in many other jurisdictions. The UK Financial Conduct Authority’s high-level Principles argue for “transparency” as a central pillar of AI regulation. Singapore’s financial services regulator, MAS, makes transparency a core aspect of its wide-ranging FEAT (“Fairness, Ethics, Accountability, and Transparency”) principles.

North America is also moving towards more comprehensive AI regulation. The Biden administration has taken a more proactive stance on AI regulation, bringing the US into closer alignment with the EU, and the new EU-US Trade and Technology Council is working to align transatlantic approaches to AI governance. Similarly, major law reform agencies in Canada are pushing for closer consistency with the incoming EU regulations.

More broadly, a Harvard meta-analysis of 36 prominent AI principles charters, authored by actors from across the world (including governments, major enterprises, and advocacy groups), finds that “Explainability principles are present in 94% of documents in the dataset”. This includes the OECD’s AI Principles framework, which has 42 governments as signatories.

Best practice AI governance and expert legal advice recommend that financial institutions ready themselves for the new regulations with timely adoption of explainable AI. 

3. Explain to improve models.

Across all industry sectors, COVID-19-related disruption to AI models was most severe in banking and financial services. Explainability allows banks to anticipate extreme events and tail risks, and to pre-emptively reparametrize or even shut down models when markets shift. It also enables data science teams to troubleshoot, debug, and upgrade their models, driving continuous improvement.

In short, explanations enable humans to understand, audit, and improve their models, which cultivates trust in AI systems. This trust is often the difference between AI systems that drive outsized returns and the 89% of models that are terminated in the experimental phase, before deployment.

Putting the “Cause” in “Because”

Causal AI is a new category of AI that can discover and reason with cause-effect relationships. Causal AI provides superior explanations as compared to correlation-based XAI techniques. 

Explainability-accuracy trade-off

With conventional machine learning, the accuracy and performance of models must be traded off to achieve explainability. The most performant models are essentially “black boxes” (Figure 1) that humans can’t scrutinize. 

Fig 1. The trade-off between explainability and accuracy that applies to classical machine learning algorithms does not apply to Causal AI.

Correlation-based XAI tools, such as SHAP and LIME, can shed light on what’s happening inside black-box models. But standard XAI explanations may be unrealistic or meaningless because they inherit the model’s spurious correlations. They are also local: they describe the model’s behavior for individual stocks and may not hold at an aggregated level across a portfolio. And there is little-to-no scope for human-in-the-loop input to constrain the model’s behavior.
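
As a concrete sketch of that workflow, here is how a post hoc, local attribution is typically produced with the open-source shap library on a generic black-box regressor; the features, model, and data below are illustrative placeholders, not the bank’s setup:

```python
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative stand-ins for per-stock features and next-day returns.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                    # e.g. sentiment, momentum, ...
y = 0.5 * X[:, 0] + rng.normal(scale=0.1, size=1000)

model = GradientBoostingRegressor().fit(X, y)

# Post hoc, local attributions: one explanation per individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])       # shape: (10, 5)

# Each row attributes one prediction to the input features, but the
# attributions inherit whatever spurious correlations the model learned.
print(shap_values[0])
```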

Explainability-accuracy synergy

Causal AI models eliminate spurious correlations and zero in on causal drivers, making them more robust in dynamic, complex capital markets. They are intrinsically explainable and do not require post hoc XAI tools. Human users can inspect the model’s assumptions via intuitive graph visualizations. Causal AI model assumptions can be scrutinized before the model is deployed, and banks can illustrate to stakeholders, including regulators and internal risk teams, how the model is likely to behave in all circumstances — even completely novel conditions. 
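
To make graph-based inspection concrete, here is a minimal sketch using networkx; the variables and edges are illustrative assumptions, not the bank’s actual model. Because the graph is an explicit artifact, risk teams can review every edge before deployment:

```python
import networkx as nx

# A hand-auditable causal graph: every edge is an explicit assumption
# that a domain expert can inspect, challenge, or veto before deployment.
graph = nx.DiGraph()
graph.add_edges_from([
    ("earnings_news", "sentiment"),
    ("momentum", "sentiment"),   # price moves also drive the news
    ("sentiment", "return"),
    ("momentum", "return"),
])

for cause, effect in graph.edges:
    print(f"{cause} -> {effect}")
```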

The causaLens blog gives a more detailed summary of the advantages of causal explanations over correlation-based XAI techniques. 

Case Study: Trading on Sentiment

A large North American Bank tested Causal AI to increase the performance and explainability of its trading models, with the aims of enhancing trading decisions, reducing regulatory and model risk, and ultimately better serving its clients.

We set out a case study detailing how Causal AI has been applied to build highly accurate sentiment analysis models with superior explainability. 

Aim

Predict the direction and magnitude of price movements from news sentiment, in order to enter and exit trades optimally.

Sentiment analysis in capital markets

The news, especially news sentiment, provides essential insight into the behavior of stocks. AI gives us the bandwidth to monitor and analyze every single news article about companies within our trading universe. We can understand whether there is an ongoing positive or negative sentiment about each stock, and pinpoint the source of such sentiment. 

Challenges

  • Causal feedback loops. Price movements influence the news, which in turn influences price. There is a non-trivial and complex interplay between news sentiment and more traditional factors, such as momentum. A key challenge is to distinguish news sentiment that is an effect of price movements from sentiment that is causing momentum shifts (see the sketch after this list).
  • Fickle market dynamics. In some periods the market pays a lot of attention to sentiment around earnings and analyst rating changes; in others it shifts its attention to ESG information and regulation. Sentiment models need to anticipate which kinds of events the market is focusing on.
  • Needle in a haystack. Sentiment data is extremely rich. A single company, on a single day, may be covered in the media in hundreds of different ways, and many news articles and social posts merely rehash prior stories, diluting genuine signals with non-information. The challenge is to identify the precise events and stories that are likely to move the market.
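
One standard way to tame the first of these challenges, the price-news feedback loop, is to unroll it in time: yesterday’s return influences today’s sentiment, and today’s sentiment influences today’s return. The following synthetic sketch (illustrative coefficients, assuming only numpy) shows how both directions leave traces in the data:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
ret = np.zeros(T)   # daily returns
sent = np.zeros(T)  # daily news sentiment

for t in range(1, T):
    # Sentiment partly *reacts* to yesterday's price move...
    sent[t] = 0.6 * ret[t - 1] + rng.normal(scale=0.5)
    # ...and partly *drives* today's return.
    ret[t] = 0.3 * sent[t] + rng.normal(scale=1.0)

# Both directions show up as correlations; only the lag structure
# separates sentiment-as-effect from sentiment-as-cause.
print("sentiment reacting to price:", np.corrcoef(sent[1:], ret[:-1])[0, 1])
print("sentiment driving price:    ", np.corrcoef(sent, ret)[0, 1])
```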

Solution

Covering the entire universe of more than ten thousand North American stocks, the Causal AI model continuously analyzes hundreds of thousands of news articles.

Figure 2. An illustration of how news sentiment interacts with price movements, focusing on Nokia as an example. Red lines indicate negative sentiment, green lines indicate positive sentiment. Numbered event types above the plot are RavenPack’s classification of news events into categories. The challenge is to establish which news sentiments are an effect of price movements, and which are driving it. Data source: RavenPack.

The model comprises an explainable map describing how momentum signals and sentiment signals causally interact with each other (see a simplified version in Fig. 3). 

Figure 3. A simple Causal AI model describing the feedback loops and correlated signals between momentum and sentiment. 

causaLens identified, at an aggregate level across the portfolio, the signature of sentiment that both drives daily returns and is not already priced in. In other words, the AI distinguishes noisy sentiment that is merely correlated with momentum from sentiment that is genuinely driving price movements.

Correlation-based machine learning cannot make this determination. Momentum and sentiment signals are typically strongly correlated with each other, and correlation-based analysis cannot disentangle or control for all of these correlated signals, resulting in a very high number of false positives.
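
A minimal sketch of the difference, on synthetic data where momentum drives both returns and an “echo” sentiment that has no causal effect of its own (all names and coefficients are illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 5000

momentum = rng.normal(size=n)
# 'Echo' sentiment merely mirrors momentum; it has no effect on returns.
echo_sent = 0.8 * momentum + rng.normal(scale=0.3, size=n)
returns = 0.5 * momentum + rng.normal(scale=0.5, size=n)

# Raw correlation flags the echo sentiment as a signal: a false positive.
print(np.corrcoef(echo_sent, returns)[0, 1])

# Controlling for momentum (correlating the residuals) removes the artifact.
M = momentum.reshape(-1, 1)
resid_sent = echo_sent - LinearRegression().fit(M, echo_sent).predict(M)
resid_ret = returns - LinearRegression().fit(M, returns).predict(M)
print(np.corrcoef(resid_sent, resid_ret)[0, 1])  # close to zero
```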

Results

causaLens’ model explanations yielded many insights that have been translated into model improvements. 

  • Causal AI unearthed asymmetries in how the market reacts to different aspects of sentiment. Positive and negative sentiment impact price movements differently: for stories about earnings, analyst rating changes, and price targets, negative sentiment hurts more than positive sentiment helps. For certain events, however, positive sentiment produces the stronger price reaction, for instance insider buying versus insider selling.
  • Large price movements in out-of-hours trading and on previous days typically revert to the mean during trading hours, but sentiment, especially media buzz, can predict which out-of-hours price movements are sticky and likely to persist through the trading day.
  • We find that the market is efficient at pricing in news events: the impact of overnight news has largely faded by the open, and news breaking during trading hours has the stronger impact on price movements, even after accounting for overnight events.

More broadly, the Causal AI model demonstrated that news sentiment, combined with feature engineering and causal reasoning, significantly improves the PnL of a trading strategy as measured by the Sharpe ratio, compared with relying on momentum or price signals alone.
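
For reference, the Sharpe ratio used as the yardstick here is typically computed along the following lines; this is a generic sketch with made-up return series, not the bank’s exact methodology:

```python
import numpy as np

def annualized_sharpe(daily_returns: np.ndarray, risk_free_daily: float = 0.0) -> float:
    """Annualized Sharpe ratio of a daily strategy return series."""
    excess = daily_returns - risk_free_daily
    # Scale by sqrt(252) trading days per year to annualize.
    return np.sqrt(252) * excess.mean() / excess.std(ddof=1)

# Illustrative comparison: momentum-only vs. sentiment-augmented strategy.
rng = np.random.default_rng(3)
momentum_only = rng.normal(0.0003, 0.01, size=252)   # made-up daily returns
with_sentiment = rng.normal(0.0006, 0.01, size=252)  # made-up daily returns
print(annualized_sharpe(momentum_only), annualized_sharpe(with_sentiment))
```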

Next Directions

News is just one driver of capital markets. Other major sources of uncertainty are occupying investors’ thoughts:

  • Inflation
  • Monetary policy
  • Geopolitical risk
  • Supply chain disruption

The Bank is now reviewing Causal AI modeling for these other factors: identifying early warnings of geopolitical risk, signals of shifting inflation, and causal drivers of supply chain disruption, and anticipating regime shifts in interest rates.