
Explanations for Occluded Images

Our own Hana Chockler’s paper on “Explanations for Occluded Images” sets out how Causal AI produces more trustworthy explanations of image classifiers. The paper, written in collaboration with researchers from Amazon Science, was presented at ICCV, the premier international computer vision conference.

causaLens Principal Investigator Hana Chockler's research paper "Explanations for Occluded Images" has been accepted to ICCV. The paper, published in collaboration with researchers from Amazon Science, leverages causal analysis to produce more reliable explanations of image classifiers. In particular, causality-based techniques produce better explanations when the input image depicts a partially occluded object.
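The paper's causal machinery is beyond the scope of this post, but the family of techniques it improves on can be illustrated with a plain occlusion-sensitivity probe: slide a masking patch over the image and record how much the classifier's score drops. This is a generic sketch, not the paper's method; `occlusion_saliency` and `toy_classifier` are hypothetical names invented for illustration.

```python
import numpy as np

def occlusion_saliency(image, classify, patch=4, stride=4, fill=0.0):
    """Slide an occluding patch over the image and accumulate the drop in
    the classifier's score; larger drops mark regions the decision relies on."""
    base = classify(image)
    h, w = image.shape
    saliency = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            # Score drop attributed uniformly to the occluded pixels.
            drop = base - classify(occluded)
            saliency[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return saliency / np.maximum(counts, 1)

# Toy "classifier": scores the mean brightness of the top-left quadrant,
# standing in for a model whose decision depends on one object region.
def toy_classifier(img):
    return float(img[:8, :8].mean())

img = np.zeros((16, 16))
img[:8, :8] = 1.0  # the "object"
sal = occlusion_saliency(img, toy_classifier)
```

In this toy setup, only patches covering the top-left "object" lower the score, so the saliency map highlights exactly that quadrant. When the object itself is partially occluded in the input, such perturbation-based maps become unreliable, which is the failure mode the paper's causal analysis addresses.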

You can watch Hana presenting the paper at ICCV: