Explaining Explanations

Speaker: Leilani H. Gilpin, Massachusetts Institute of Technology

Abstract

There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insight into their behavior and thought processes. XAI makes a system more transparent to its users and to other parts of the system itself, by providing explanations of its decisions at some level of detail. Such explanations are important for ensuring algorithmic fairness, identifying potential bias or problems in the training data, and verifying that algorithms perform as expected. However, the explanations produced by these systems are neither standardized nor systematically assessed. In this talk, in an effort to establish best practices and identify open challenges, I define the foundational concepts of explainability and show how they can be used to classify the existing literature. I also discuss why current approaches to explanation, especially for deep neural networks, are insufficient. A review paper on this subject is available on arXiv and in the conference proceedings [1].

[1] L. H. Gilpin, D. Bau, B. Z. Yuan, A. Bajwa, M. Specter and L. Kagal, "Explaining Explanations: An Overview of Interpretability of Machine Learning," 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), Turin, Italy, 2018, pp. 80-89.
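As a concrete illustration of the kind of post-hoc explanation method for deep neural networks discussed in the talk, the sketch below computes a simple gradient-based saliency map for an image classifier. This is a minimal, hypothetical example, not code from the talk or the paper; the pretrained ResNet, the random input tensor, and the preprocessing are all assumptions standing in for a real model and image, and a recent version of torchvision is assumed.

```python
# Minimal sketch of a gradient (saliency) explanation, a common post-hoc
# technique for visualizing which input pixels most influence a deep
# network's prediction. Hypothetical example for illustration only.
import torch
import torchvision.models as models

# Assumption: a pretrained ResNet-18 stands in for the model to be explained.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Assumption: a random tensor stands in for a preprocessed input image.
x = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass: take the score of the predicted class.
scores = model(x)
top_class = scores.argmax(dim=1)
score = scores[0, top_class]

# Backward pass: the gradient of that score with respect to the input
# indicates which pixels the prediction is most sensitive to.
score.backward()
saliency = x.grad.abs().max(dim=1)[0]  # collapse color channels -> (1, 224, 224)

print(saliency.shape)
```

The saliency map can then be overlaid on the input image; the survey in [1] discusses why such visualizations alone fall short of a satisfying explanation.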

The slides can be downloaded here.

Bio

Leilani H. Gilpin is a PhD candidate in Electrical Engineering and Computer Science at MIT, supervised by Prof. Gerald Jay Sussman and funded by the Toyota Research Institute. She works on enabling autonomous vehicles and other autonomous machines to explain themselves. More information on her research interests is available at people.csail.mit.edu/lgilpin. Before MIT, Leilani worked as a research engineer at the Palo Alto Research Center (PARC), focusing on anomaly detection in healthcare. She earned an M.S. in Computational and Mathematical Engineering from Stanford University in 2013, and a B.S. in Mathematics (with honors), a B.S. in Computer Science (with highest honors), and a minor in Music from UC San Diego in 2011.