Abstract
In this paper, we introduce a novel interpreting framework that learns an interpretable model, based on an ontology-based sampling technique, to explain black-box prediction models in a model-agnostic fashion. Unlike existing approaches, our algorithm considers the contextual correlation among words, described in domain knowledge ontologies, to generate semantic explanations. To narrow down the search space for explanations, which is exponentially large for long and complicated text data, we design a learnable anchor algorithm to better extract local, domain knowledge-oriented explanations. A set of regulations is further introduced to combine learned interpretable representations with anchors and information extraction into comprehensible semantic explanations. For an extensive evaluation, we first develop a drug abuse ontology (DAO) for a drug abuse dataset collected from the Twittersphere and a consumer complaint ontology (ConsO) for a consumer complaint dataset, both tailored to interpretable machine learning. Our experimental results show that our approach generates more precise and more insightful explanations than a variety of baseline approaches.
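To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of an anchor-style explanation with ontology-guided perturbation sampling: non-anchored words are resampled only from sibling terms under the same ontology concept, and the anchor is grown greedily until the black-box prediction is stable. The toy classifier `black_box`, the tiny `ONTOLOGY`, and the precision threshold `tau` are all illustrative assumptions.

```python
# Illustrative sketch only: toy ontology, toy classifier, greedy anchor search.
import random

# Hypothetical mini-ontology: each concept lists interchangeable surface words.
ONTOLOGY = {
    "substance": ["fentanyl", "oxycodone", "aspirin", "ibuprofen"],
    "action": ["bought", "sold", "used", "mentioned"],
    "time": ["yesterday", "today", "recently"],
}
WORD2CONCEPT = {w: c for c, words in ONTOLOGY.items() for w in words}
ILLICIT = {"fentanyl", "oxycodone", "heroin"}  # toy labeling rule inside the black box


def black_box(tokens):
    """Stand-in for an arbitrary prediction model: flags drug-abuse-related text."""
    return int(any(t in ILLICIT for t in tokens))


def sample_perturbation(tokens, anchor, rng):
    """Resample non-anchored words from their ontology concept (contextual sampling)."""
    out = []
    for i, tok in enumerate(tokens):
        if i in anchor or tok not in WORD2CONCEPT:
            out.append(tok)  # keep anchored or out-of-ontology words unchanged
        else:
            out.append(rng.choice(ONTOLOGY[WORD2CONCEPT[tok]]))
    return out


def anchor_precision(tokens, anchor, target, n=200):
    """Fraction of ontology-guided perturbations on which the prediction is preserved."""
    rng = random.Random(0)
    hits = sum(black_box(sample_perturbation(tokens, anchor, rng)) == target
               for _ in range(n))
    return hits / n


def greedy_anchor(tokens, tau=0.95):
    """Greedily add the word that most improves precision until it reaches tau."""
    target = black_box(tokens)
    anchor = set()
    while anchor_precision(tokens, anchor, target) < tau and len(anchor) < len(tokens):
        best = max((i for i in range(len(tokens)) if i not in anchor),
                   key=lambda i: anchor_precision(tokens, anchor | {i}, target))
        anchor.add(best)
    return [tokens[i] for i in sorted(anchor)]


if __name__ == "__main__":
    text = "he bought fentanyl yesterday".split()
    print(greedy_anchor(text))  # e.g. ['fentanyl'] explains the positive prediction
```

Restricting perturbations to ontology siblings is what keeps the sampled neighborhood semantically coherent and shrinks the otherwise exponential search space; the greedy precision criterion above is a simplification of how a learnable anchor algorithm might score candidate anchors.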
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 770-793 |
| Number of pages | 24 |
| Journal | Journal of Combinatorial Optimization |
| Volume | 44 |
| Issue number | 1 |
| DOIs | |
| State | Published - Aug 2022 |
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Discrete Mathematics and Combinatorics
- Control and Optimization
- Computational Theory and Mathematics
- Applied Mathematics
Keywords
- Anchor
- Information extraction
- Interpretable machine learning
- Ontology