XRAND: Differentially Private Defense against Explanation-Guided Attacks

Truc Nguyen, Phung Lai, Hai Phan, My T. Thai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Scopus citation


Recent developments in the field of explainable artificial intelligence (XAI) have helped improve trust in Machine-Learning-as-a-Service (MLaaS) systems, in which an explanation is provided together with the model prediction in response to each query. However, XAI also opens the door for adversaries to gain insights into the black-box models in MLaaS, thereby making the models more vulnerable to several attacks. For example, feature-based explanations (e.g., SHAP) could expose the top important features that a black-box model focuses on. Such disclosure has been exploited to craft effective backdoor triggers against malware classifiers. To address this trade-off, we introduce a new concept of achieving local differential privacy (LDP) in the explanations, and from that we establish a defense, called XRAND, against such attacks. We show that our mechanism restricts the information that the adversary can learn about the top important features, while maintaining the faithfulness of the explanations.
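To make the idea concrete, the following is a minimal illustrative sketch of applying LDP to a feature-based explanation. It is not the paper's actual XRAND algorithm; it simply shows one standard way (randomized response) to privatize which features appear in the reported top-k set, so an observer cannot reliably infer the features the model relies on most. The function name and interface are assumptions for illustration.

```python
import math
import random

def ldp_perturb_topk(importances, k, epsilon, rng=None):
    """Illustrative sketch (NOT the paper's XRAND mechanism):
    randomized response over top-k membership of explanation features.

    Each feature's membership bit in the reported top-k set is kept
    with probability e^eps / (1 + e^eps) and flipped otherwise, which
    satisfies epsilon-local differential privacy for each bit.
    """
    rng = rng or random.Random()
    d = len(importances)
    # Rank features by absolute importance (e.g., |SHAP value|).
    ranked = sorted(range(d), key=lambda i: -abs(importances[i]))
    topk = [False] * d
    for i in ranked[:k]:
        topk[i] = True
    # Randomized response: larger epsilon -> less perturbation.
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return [b if rng.random() < p_keep else not b for b in topk]
```

With a large privacy budget (epsilon) the reported set matches the true top-k almost surely; with a small budget each membership bit is close to a coin flip, limiting what an adversary can learn while preserving the set's utility in expectation.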

Original language: English (US)
Title of host publication: AAAI-23 Technical Tracks 10
Editors: Brian Williams, Yiling Chen, Jennifer Neville
Publisher: AAAI Press
Number of pages: 9
ISBN (Electronic): 9781577358800
State: Published - Jun 27 2023
Event: 37th AAAI Conference on Artificial Intelligence, AAAI 2023 - Washington, United States
Duration: Feb 7 2023 - Feb 14 2023

Publication series

Name: Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023


Conference: 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Country/Territory: United States

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence


