TY - GEN
T1 - Fairness via Group Contribution Matching
AU - Li, Tianlin
AU - Li, Zhiming
AU - Li, Anran
AU - Du, Mengnan
AU - Liu, Aishan
AU - Guo, Qing
AU - Meng, Guozhu
AU - Liu, Yang
N1 - Publisher Copyright:
© 2023 International Joint Conferences on Artificial Intelligence. All rights reserved.
PY - 2023
Y1 - 2023
AB - Fairness issues in Deep Learning models have recently received increasing attention due to their significant societal impact. Although methods for mitigating unfairness are constantly being proposed, little research has been conducted to understand how discrimination and bias develop during the standard training process. In this study, we propose analyzing the contribution of each subgroup (i.e., a group of data sharing the same sensitive attribute) to the training process in order to understand how such bias develops. We propose a gradient-based metric to assess training subgroup contribution disparity, showing that unequal contributions from different subgroups are one source of such unfairness. One way to balance the contribution of each subgroup is through oversampling, which ensures that an equal number of samples is drawn from each subgroup during each training iteration. However, we find that even with a balanced number of samples, the contribution of each group remains unequal, so unfairness persists under such a strategy. To address the above issues, we propose a simple yet effective group contribution matching (GCM) method to match the contribution of each subgroup. Our experiments show that GCM effectively improves fairness and significantly outperforms other methods.
UR - http://www.scopus.com/inward/record.url?scp=85170355946&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85170355946&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85170355946
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 436
EP - 445
BT - Proceedings of the 32nd International Joint Conference on Artificial Intelligence, IJCAI 2023
A2 - Elkind, Edith
PB - International Joint Conferences on Artificial Intelligence
T2 - 32nd International Joint Conference on Artificial Intelligence, IJCAI 2023
Y2 - 19 August 2023 through 25 August 2023
ER -