Mitigating social biases of pre-trained language models via contrastive self-debiasing with double data augmentation

Yingji Li, Mengnan Du, Rui Song, Xin Wang, Mingchen Sun, Ying Wang

Research output: Contribution to journal › Article › peer-review

Abstract

Pre-trained Language Models (PLMs) have been shown to inherit and even amplify the social biases contained in their training corpora, leading to undesired stereotypes in real-world applications. Existing techniques for mitigating the social biases of PLMs mainly rely on data augmentation with manually designed prior knowledge or on fine-tuning with abundant external corpora. However, these methods are not only limited by artificial experience, but also consume substantial resources to access all the parameters of the PLMs, and are prone to introducing new external biases when fine-tuning with external corpora. In this paper, we propose a Contrastive Self-Debiasing model with Double Data Augmentation (named CD3) for mitigating the social biases of PLMs. Specifically, CD3 consists of two stages: double data augmentation and contrastive self-debiasing. First, we build on counterfactual data augmentation to perform a secondary augmentation using biased prompts that are automatically searched by maximizing the differences in PLMs' encodings across demographic groups. Double data augmentation further amplifies the biases between sample pairs, breaking the limitation of previous debiasing models that rely heavily on prior knowledge during data augmentation. We then leverage the augmented data for contrastive learning to train a plug-and-play adapter that mitigates the social biases in PLMs' encodings without tuning the PLMs themselves. Extensive experimental results on BERT, ALBERT, and RoBERTa across several real-world datasets and fairness metrics show that CD3 outperforms baseline models on both gender and race debiasing while retaining the language modeling capabilities of the PLMs.
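The first stage builds on counterfactual data augmentation (CDA), which pairs each sentence with a counterfactual in which demographic terms are swapped. The following is a minimal illustrative sketch of that swapping step only; the term lists, function names, and capitalization handling are assumptions for illustration, not the paper's actual implementation, and the paper's secondary prompt-based augmentation and contrastive adapter are not shown.

```python
import re

# Illustrative gender term pairs; the actual word lists used for CDA
# may differ. Note the simplification: "her" maps only to "him",
# although it can also correspond to "his".
GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "man": "woman", "woman": "man",
    "his": "her",
}

def counterfactual(sentence: str) -> str:
    """Replace each demographic term with its counterpart,
    preserving simple initial capitalization."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = GENDER_PAIRS[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl

    # Word boundaries prevent matches inside longer words (e.g. "the").
    pattern = r"\b(" + "|".join(GENDER_PAIRS) + r")\b"
    return re.sub(pattern, swap, sentence, flags=re.IGNORECASE)

print(counterfactual("He gave his book to the woman."))
# → She gave her book to the man.
```

In CD3, such (original, counterfactual) pairs would then be further augmented with automatically searched biased prompts before being used as training pairs for the contrastive adapter.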

Original language: English (US)
Article number: 104143
Journal: Artificial Intelligence
Volume: 332
DOIs
State: Published - Jul 2024
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Linguistics and Language
  • Artificial Intelligence

Keywords

  • Contrastive learning
  • Data augmentation
  • Pre-trained language models
  • Prompt learning
  • Social bias

