Real estate listing platform Zillow has been making headlines for the wrong reasons — $10B of market capitalization wiped out and one in four of its staff let go.
What went wrong? Zillow ventured into house flipping, using machine learning algorithms to decide which properties to buy and how much to pay for them, but the algorithms were broken.
We hate to say to Zillow “we told you so”, but we told you so. The horror story could have been averted with Causal AI.
Four Problems with ML That the Zillow Episode Illustrates
The Zillow debacle is a textbook case study in the shortcomings of the current incarnation of AI, correlation-based machine learning (ML). We highlight four core problems:
- ML breaks when markets shift
You might have thought it would be difficult to lose nearly $80k per house by trading property during a raging bull market. However, machine learning algorithms break when markets radically change, because they assume that past correlations are indicative of the future. “Zillow algos failed to take into account the recent slowdown in home price appreciation — even as price gains cooled, Zillow kept buying more homes and paying more for them”, the FT reports.
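The failure mode is easy to reproduce in miniature. Below is a toy sketch (all numbers are hypothetical, not Zillow's data or model): a purely correlational model fits the price-appreciation trend it saw during training, then keeps extrapolating that trend after the market cools — so it overpays by a margin that grows every month.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training regime: 24 months of steady ~1.5%/month appreciation.
months_train = np.arange(24)
prices_train = 300_000 * (1.015 ** months_train) + rng.normal(0, 2_000, 24)

# A purely correlational model: fit log-price against time, then extrapolate.
slope, intercept = np.polyfit(months_train, np.log(prices_train), 1)

# Market shift: appreciation cools to ~0.2%/month for the next 6 months.
months_test = np.arange(24, 30)
last = 300_000 * 1.015 ** 23
prices_test = last * (1.002 ** (months_test - 23))

# The model still projects the old trend, so its "fair value" runs ahead
# of the cooling market — the gap is what a buyer would overpay.
predicted = np.exp(intercept + slope * months_test)
overpayment = predicted - prices_test
print(overpayment.round(0))
```

The overpayment is positive in every month and grows over time: nothing in the fitted trend tells the model that the regime driving its training correlations has changed.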
- ML explanations are unhelpful
Zillow was using standard “explainable AI” tools, which offer “the best of both worlds between black box and intuitive explainable models”, as one senior Zillow executive put it. In reality, their black box model was producing inaccurate predictions, and the explanatory model revealed this only after it was too late.
- ML can’t handle small data
Real estate investing is a small data problem: datasets often contain only quarterly or annual entries. Current ML fails on small data because algorithms of the sort Zillow was using have large numbers of features whose weights are radically underdetermined by so few observations.
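Underdetermination is concrete and easy to demonstrate. In this toy sketch (synthetic numbers, not any real dataset), there are far more feature weights than observations, so infinitely many weight vectors fit the training data perfectly — yet they disagree wildly on a new example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 8 quarterly observations, 50 candidate features --
# far more weights than data points.
n_obs, n_feat = 8, 50
X = rng.normal(size=(n_obs, n_feat))
y = rng.normal(size=n_obs)

# The minimum-norm least-squares solution fits the training data exactly...
w_min, *_ = np.linalg.lstsq(X, y, rcond=None)

# ...but so does w_min plus any vector from the null space of X.
_, _, Vt = np.linalg.svd(X)
null_vec = Vt[-1]              # a direction the 8 observations never constrain
w_alt = w_min + 100 * null_vec

print(np.abs(X @ w_min - y).max())   # essentially zero: perfect fit
print(np.abs(X @ w_alt - y).max())   # essentially zero: also a perfect fit

# The two "perfect" models disagree on a new, unseen observation.
x_new = rng.normal(size=n_feat)
print(x_new @ w_min, x_new @ w_alt)
```

Both models are indistinguishable on the training data, so the data alone cannot say which one to trust — and their predictions diverge as soon as the model is used on anything new.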
Don’t be a Zillow — Go Causal!
Causal AI is the only AI that real estate investors can trust. View our webinar co-hosted with one of the world’s largest real estate investors to learn more.
“Transparency and explainability of AI models requires an understanding of causality — an inherent advantage of the causaLens platform”
— Head of Data at Nuveen Real Estate
- Causal AI identifies causal signals that are robust across different market contexts, and don’t overfit to misleading trends.
- Causal discovery algorithms can learn from extremely small datasets, picking out critical information from limited observations, much as humans do.
- Causal explainability allows investors to guarantee the behaviour of their models before letting them loose in production.
- Causal AI goes beyond predictions to directly make decisions that take account of broader business context and risk.
Beyond real estate, reach out to find out why Causal AI is the only form of trustworthy AI for all industries.