Towards structured NLP interpretation via graph explainers

Hao Yuan, Fan Yang, Mengnan Du, Shuiwang Ji, Xia Hu

Research output: Contribution to journal › Letter › peer-review

2 Scopus citations

Abstract

Natural language processing (NLP) models have been increasingly deployed in real-world applications, and interpretation of textual data has recently attracted considerable attention. Most existing methods generate feature-importance interpretations, which indicate the contribution of each word to a specific model prediction. However, text data typically possess highly structured characteristics, and feature-importance explanations cannot fully reveal the rich information contained in text. To bridge this gap, we propose generating structured interpretations for textual data. Specifically, we pre-process the original text using dependency parsing, which transforms the text from sequences into graphs. Graph neural networks (GNNs) are then used to classify the transformed graphs. In particular, we explore two kinds of structured interpretation for pre-trained GNNs: edge-level interpretation and subgraph-level interpretation. Experimental results on three text datasets demonstrate that structured interpretations better reveal the structured knowledge encoded in text. Further analysis indicates that the proposed interpretations faithfully reflect the decision-making process of the GNN model.
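The pipeline described in the abstract (dependency parse → graph → GNN classification → edge-level interpretation) can be sketched minimally as follows. This is an illustrative sketch only, not the authors' implementation: the dependency triples are hypothetical parser output (e.g., what a tool like spaCy might produce), and the scoring function is a toy stand-in for a trained GNN. Edge importance is computed here by occlusion (removing one edge at a time and measuring the change in the score), one common way to realize edge-level interpretation.

```python
# Hypothetical dependency parse of "The movie was surprisingly good",
# given as (head_index, dependent_index, relation) triples.
tokens = ["The", "movie", "was", "surprisingly", "good"]
dep_edges = [
    (1, 0, "det"),      # movie -> The
    (2, 1, "nsubj"),    # was -> movie
    (4, 3, "advmod"),   # good -> surprisingly
    (2, 4, "acomp"),    # was -> good
]

# Build an undirected adjacency list: the graph a GNN would consume.
adj = {i: set() for i in range(len(tokens))}
for head, dep, _rel in dep_edges:
    adj[head].add(dep)
    adj[dep].add(head)

def graph_score(adjacency, root=2):
    """Toy stand-in for a trained GNN's prediction score: the number
    of tokens reachable from the root of the parse (index 2)."""
    seen, stack = {root}, [root]
    while stack:
        node = stack.pop()
        for nb in adjacency[node]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return len(seen)

def edge_importance(edges, n_tokens):
    """Edge-level interpretation by occlusion: remove each edge in turn
    and record how much the score drops; larger drops mean the edge
    matters more to the model's decision."""
    base_adj = {i: set() for i in range(n_tokens)}
    for h, d, _ in edges:
        base_adj[h].add(d)
        base_adj[d].add(h)
    base = graph_score(base_adj)
    scores = {}
    for h, d, rel in edges:
        pruned = {i: set(nbs) for i, nbs in base_adj.items()}
        pruned[h].discard(d)
        pruned[d].discard(h)
        scores[(h, d, rel)] = base - graph_score(pruned)
    return scores

importance = edge_importance(dep_edges, len(tokens))
```

Under this toy score, the edges attached directly to the root ("was"→"movie", "was"→"good") disconnect more of the graph when removed and so receive higher importance than the leaf edges, mirroring how an edge-level explainer highlights structurally critical relations.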

Original language: English (US)
Article number: e58
Journal: Applied AI Letters
Volume: 2
Issue number: 4
DOIs
State: Published - Dec 2021
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence

Keywords

  • graph neural network
  • natural language processing
  • structured interpretation

