Interpreting black-box classifiers using instance-level visual explanations

Paolo Tamagnini, Josua Krause, Aritra Dasgupta, Enrico Bertini

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

54 Scopus citations

Abstract

To realize the full potential of machine learning in diverse real-world domains, it is necessary for model predictions to be readily interpretable and actionable for the human in the loop. Analysts, who are the users but not the developers of machine learning models, often do not trust a model because of the lack of transparency in associating predictions with the underlying data space. To address this problem, we propose Rivelo, a visual analytics interface that enables analysts to understand the causes behind predictions of binary classifiers by interactively exploring a set of instance-level explanations. These explanations are model-agnostic, treating the model as a black box, and they help analysts interactively probe the high-dimensional binary data space to detect features relevant to predictions. We demonstrate the utility of the interface with a case study analyzing a random forest model on the sentiment of Yelp reviews about doctors.
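
As a rough illustration of the kind of instance-level, model-agnostic explanation the abstract describes, the sketch below trains a scikit-learn random forest on a toy bag-of-words sentiment corpus and greedily removes active features from a single instance until the predicted label flips; the removed words then serve as an explanation. This is an assumption-laden sketch (the corpus, the greedy removal strategy, and all identifiers such as explain_instance are invented for illustration), not the algorithm or interface presented in the paper.

```python
# Illustrative sketch only: a simple instance-level, model-agnostic explanation
# for a black-box binary classifier over binary bag-of-words features.
# The corpus, the greedy feature-removal strategy, and every identifier here
# (e.g. explain_instance) are assumptions for this example, not the method
# or data from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

# Tiny stand-in corpus for the Yelp doctor-review sentiment case study.
train_texts = [
    "great doctor caring staff",
    "rude staff long wait",
    "caring and helpful doctor",
    "long wait rude receptionist",
]
train_labels = np.array([1, 0, 1, 0])  # 1 = positive sentiment

vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(train_texts).toarray()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, train_labels)

def explain_instance(x, model, vectorizer):
    """Greedily zero out active features until the predicted label flips.

    The removed features form a (not necessarily minimal) explanation:
    words whose joint absence changes the model's decision. Only
    model.predict / model.predict_proba are used, so the classifier is
    treated strictly as a black box.
    """
    x = x.copy()
    original = model.predict([x])[0]
    removed = []
    active = list(np.flatnonzero(x))
    while active:
        # Remove the feature whose deletion most reduces confidence
        # in the original prediction.
        scores = []
        for i in active:
            x_masked = x.copy()
            x_masked[i] = 0
            scores.append(model.predict_proba([x_masked])[0][original])
        best = active[int(np.argmin(scores))]
        x[best] = 0
        active.remove(best)
        removed.append(best)
        if model.predict([x])[0] != original:
            names = vectorizer.get_feature_names_out()
            return [names[i] for i in removed]
    return []  # no flip found; the explanation is inconclusive

# Words whose removal flips the prediction for one review
# (may be empty on this toy data).
print(explain_instance(X[1], model, vectorizer))
```

A visual analytics tool like Rivelo would aggregate many such per-instance explanations and let the analyst browse them interactively, rather than printing them one at a time.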

Original language: English (US)
Title of host publication: Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA 2017
Publisher: Association for Computing Machinery, Inc
ISBN (Electronic): 9781450350297
State: Published - May 14, 2017
Externally published: Yes
Event: 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA 2017 - Chicago, United States
Duration: May 14, 2017 → …

Publication series

Name: Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA 2017

Conference

Conference: 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA 2017
Country/Territory: United States
City: Chicago
Period: 5/14/17 → …

All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Computer Science Applications
  • Information Systems

Keywords

  • Classification
  • Explanation
  • Machine learning
  • Visual analytics
