TY - GEN
T1 - FairDP: Certified Fairness with Differential Privacy
T2 - 2025 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2025
AU - Tran, Khang
AU - Fioretto, Ferdinando
AU - Khalil, Issa
AU - Thai, My T.
AU - Phan, Linh Thi Xuan
AU - Phan, Nhat Hai
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - This paper introduces FairDP, a novel training mechanism designed to provide group fairness certification for the trained model's decisions, along with a differential privacy (DP) guarantee to protect the training data. The key idea of FairDP is to train models for distinct individual groups independently, add noise to each group's gradient to protect data privacy, and progressively integrate knowledge from the group models into a comprehensive model that balances privacy, utility, and fairness in downstream tasks. In doing so, FairDP ensures equal contribution from each group while gaining control over the amount of DP-preserving noise added to each group's contribution. To provide fairness certification, FairDP leverages the DP-preserving noise to statistically quantify and bound fairness metrics. An extensive theoretical and empirical analysis on benchmark datasets validates the efficacy of FairDP and demonstrates improved trade-offs between model utility, privacy, and fairness compared with existing methods. Our empirical results indicate that FairDP improves fairness metrics by more than 65% on average while incurring only a marginal utility drop (less than 4% on average) under rigorous DP preservation, compared with existing baselines across benchmark datasets.
KW - differential privacy
KW - fairness
KW - machine learning
UR - http://www.scopus.com/inward/record.url?scp=105007292206&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105007292206&partnerID=8YFLogxK
DO - 10.1109/SaTML64287.2025.00058
M3 - Conference contribution
AN - SCOPUS:105007292206
T3 - Proceedings - 2025 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2025
SP - 956
EP - 976
BT - Proceedings - 2025 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2025
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 9 April 2025 through 11 April 2025
ER -