Explainability for Large Language Models: A Survey

Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Mengnan Du

Research output: Contribution to journal › Article › peer-review

37 Scopus citations

Abstract

Large language models (LLMs) have demonstrated impressive capabilities in natural language processing. However, their internal mechanisms remain unclear, and this lack of transparency poses unwanted risks for downstream applications. Understanding and explaining these models is therefore crucial for elucidating their behaviors, limitations, and social impacts. In this article, we introduce a taxonomy of explainability techniques and provide a structured overview of methods for explaining Transformer-based language models. We categorize techniques according to the training paradigms of LLMs: the traditional fine-tuning-based paradigm and the prompting-based paradigm. For each paradigm, we summarize the goals and dominant approaches for generating local explanations of individual predictions and global explanations of overall model knowledge. We also discuss metrics for evaluating generated explanations and examine how explanations can be leveraged to debug models and improve performance. Finally, we examine key challenges and emerging opportunities for explanation techniques in the era of LLMs in comparison to conventional deep learning models.
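Purely as an illustration of the kind of "local explanation of an individual prediction" the abstract refers to (and not as code from the survey itself), the sketch below computes gradient-times-input saliency scores for each input token of a Transformer-based classifier. The Hugging Face model name `distilbert-base-uncased-finetuned-sst-2-english` is an assumed example, not one prescribed by the paper.

```python
# Illustrative sketch: gradient x input saliency as a local explanation of one prediction.
# Assumptions: Hugging Face transformers is installed; the example model name is illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed example model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "The movie was surprisingly good."
inputs = tokenizer(text, return_tensors="pt")

# Embed the tokens ourselves (as a leaf tensor) so gradients w.r.t. the embeddings are retained.
embeddings = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeddings.requires_grad_(True)

logits = model(inputs_embeds=embeddings, attention_mask=inputs["attention_mask"]).logits
pred_class = logits.argmax(dim=-1).item()

# Gradient of the predicted class score with respect to each token embedding.
logits[0, pred_class].backward()

# Gradient x input, summed over the embedding dimension, gives one saliency score per token.
saliency = (embeddings.grad * embeddings).sum(dim=-1).squeeze(0).detach()

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, score in zip(tokens, saliency.tolist()):
    print(f"{token:>12s}  {score:+.4f}")
```

Tokens with large positive scores are those the model's predicted class relies on most under this (first-order, linearized) attribution; the survey itself compares such gradient-based methods with perturbation-, attention-, and prompting-based alternatives.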

Original language: English (US)
Article number: 20
Journal: ACM Transactions on Intelligent Systems and Technology
Volume: 15
Issue number: 2
DOIs
State: Published - Feb 22, 2024

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Artificial Intelligence

Keywords

  • Explainability
  • interpretability
  • large language models
