Boosting Fair Classifier Generalization through Adaptive Priority Reweighing

Zhihao Hu, Yiran Xu, Mengnan Du, Jindong Gu, Xinmei Tian, Fengxiang He

Research output: Contribution to journal › Article › peer-review

Abstract

With the increasing penetration of machine learning applications in critical decision-making areas, calls for algorithmic fairness have grown more prominent. Although various methods improve algorithmic fairness by learning with fairness constraints, their performance often fails to generalize to the test set. A fair algorithm that maintains strong performance with better generalizability is therefore needed. This article proposes a novel adaptive reweighing method that mitigates the impact of distribution shifts between training and test data on model generalizability. Most previous reweighing methods assign a uniform weight to each (sub)group. In contrast, our method granularly models the distance from each sample's prediction to the decision boundary. It prioritizes samples closer to the decision boundary, assigning them higher weights to improve the generalizability of fair classifiers. Extensive experiments validate the generalizability of our adaptive priority reweighing method for accuracy and fairness measures (i.e., equal opportunity, equalized odds, and demographic parity) on tabular benchmarks. We also highlight our method's performance in improving the fairness of language and vision models. The code is available at https://github.com/che2198/APW.
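The core idea described in the abstract — weighting samples by their proximity to the decision boundary, rather than assigning one weight per group — can be illustrated with a minimal sketch. This is an assumed approximation for illustration only, not the authors' implementation (see the linked repository); the synthetic data, the exponential weighting form, and the `alpha` knob are all hypothetical choices.

```python
# Illustrative sketch of boundary-distance-based priority reweighing.
# NOT the paper's official algorithm; see https://github.com/che2198/APW.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: features, a binary sensitive attribute, binary labels.
n = 1000
X = rng.normal(size=(n, 5))
group = rng.integers(0, 2, size=n)  # sensitive attribute (0 or 1)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Step 1: fit an initial classifier.
clf = LogisticRegression().fit(X, y)

# Step 2: measure each sample's distance to the decision boundary (p = 0.5).
p = clf.predict_proba(X)[:, 1]
dist = np.abs(p - 0.5)

# Step 3: priority weights — samples nearer the boundary get larger weights.
# alpha (assumed knob) controls how sharply weights concentrate at the boundary.
alpha = 5.0
w = np.exp(-alpha * dist)

# Step 4: rescale weights within each sensitive group so each group's
# total weight equals its sample count (groups stay balanced overall).
for g in (0, 1):
    mask = group == g
    w[mask] *= mask.sum() / w[mask].sum()

# Step 5: retrain with the adaptive weights; the full method would
# iterate and adapt the weights toward a chosen fairness criterion.
clf_fair = LogisticRegression().fit(X, y, sample_weight=w)
```

The per-group rescaling in step 4 keeps the total influence of each group fixed while redistributing weight toward boundary-adjacent samples inside the group, which is the granularity the abstract contrasts with uniform per-group weights.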

Original language: English (US)
Article number: 40
Journal: ACM Transactions on Knowledge Discovery from Data
Volume: 19
Issue number: 2
DOIs
State: Published - Feb 15 2025

All Science Journal Classification (ASJC) codes

  • General Computer Science

Keywords

  • Algorithmic fairness
  • reweighing method
  • trustworthy AI
