ProxiMix: Enhancing Fairness with Proximity Samples in Subgroups

Jingyu Hu, Jun Hong, Mengnan Du, Weiru Liu

Research output: Contribution to journal › Conference article › peer-review

Abstract

Many bias mitigation methods have been developed to address fairness issues in machine learning. We have found that using linear mixup, a data augmentation technique, alone for bias mitigation can still retain biases present in dataset labels. This paper addresses that issue by proposing a novel pre-processing strategy in which both an existing mixup method and our new bias mitigation algorithm can be utilized to improve the generation of labels for augmented samples, making them proximity aware. Specifically, we propose ProxiMix, which preserves both pairwise and proximity relationships for fairer data augmentation. We have conducted thorough experiments with three datasets, three ML models, and different hyperparameter settings. Our experimental results show the effectiveness of ProxiMix from the perspectives of both fairness of predictions and fairness of recourse.
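For context, the linear mixup baseline the abstract refers to generates a synthetic sample by convexly interpolating a pair of training examples and their labels; because the label is interpolated directly from the original labels, any bias in those labels carries over to the augmented data. A minimal sketch of standard linear mixup (the function name and toy data are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_mixup(x_i, y_i, x_j, y_j, alpha=0.2):
    """Standard linear mixup: combine a pair of samples and their labels
    with a mixing weight lambda drawn from Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    x_tilde = lam * x_i + (1 - lam) * x_j
    # The label is interpolated the same way, so any bias present in
    # y_i or y_j is retained in the augmented label.
    y_tilde = lam * y_i + (1 - lam) * y_j
    return x_tilde, y_tilde

# Toy example: two feature vectors with (possibly biased) binary labels.
x_a, y_a = np.array([1.0, 0.0]), 1.0
x_b, y_b = np.array([0.0, 1.0]), 0.0
x_new, y_new = linear_mixup(x_a, y_a, x_b, y_b)
```

ProxiMix replaces this purely pairwise label generation with a proximity-aware one, so that labels of nearby samples within subgroups also inform the augmented label; the details are given in the paper itself.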

Original language: English (US)
Journal: CEUR Workshop Proceedings
Volume: 3808
State: Published - 2024
Event: 2nd Workshop on Fairness and Bias in AI, AEQUITAS 2024 - Santiago de Compostela, Spain
Duration: Oct 20 2024 → …

All Science Journal Classification (ASJC) codes

  • General Computer Science

Keywords

  • Bias Mitigations
  • Data Augmentation
  • Group Fairness
  • Mixup
