TY - GEN
T1 - Fair Domain Generalization with Heterogeneous Sensitive Attributes Across Domains
AU - Palakkadavath, Ragja
AU - Le, Hung
AU - Nguyen-Tang, Thanh
AU - Gupta, Sunil
AU - Venkatesh, Svetha
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Domain generalization (DG) techniques classify data from unseen domains by leveraging data from multiple source domains. Most methods in DG focus on improving predictive performance in the unseen domain. Recent studies have started to enhance fairness measures in the unseen domain. However, these studies assume that every domain has the same, single sensitive attribute, including the unseen domain. In practice, each domain may be required to satisfy fairness on its own set of sensitive attributes. Given a set of sensitive attributes (S), current methods need to train 2^n models to ensure fairness on any subset of S, where n = |S|. We propose a single-model solution to address this new problem setting. We learn two feature representations: one to generalize the model's predictive performance, and another to generalize the model's fairness. The first representation is made invariant across domains to generalize predictive performance. The second representation is kept selectively invariant, i.e., invariant only across domains having the same sensitive attributes. Our single model exhibits superior predictive performance and fairness measures against the current alternative of 2^n models on unseen domains on multiple real-world datasets. Our code is available at https://github.com/ragjapk/SISA.
AB - Domain generalization (DG) techniques classify data from unseen domains by leveraging data from multiple source domains. Most methods in DG focus on improving predictive performance in the unseen domain. Recent studies have started to enhance fairness measures in the unseen domain. However, these studies assume that every domain has the same, single sensitive attribute, including the unseen domain. In practice, each domain may be required to satisfy fairness on its own set of sensitive attributes. Given a set of sensitive attributes (S), current methods need to train 2^n models to ensure fairness on any subset of S, where n = |S|. We propose a single-model solution to address this new problem setting. We learn two feature representations: one to generalize the model's predictive performance, and another to generalize the model's fairness. The first representation is made invariant across domains to generalize predictive performance. The second representation is kept selectively invariant, i.e., invariant only across domains having the same sensitive attributes. Our single model exhibits superior predictive performance and fairness measures against the current alternative of 2^n models on unseen domains on multiple real-world datasets. Our code is available at https://github.com/ragjapk/SISA.
KW - algorithmic fairness
KW - covariate shift
KW - domain generalization
KW - multiple sensitive attributes
UR - https://www.scopus.com/pages/publications/105003632149
UR - https://www.scopus.com/pages/publications/105003632149#tab=citedBy
U2 - 10.1109/WACV61041.2025.00718
DO - 10.1109/WACV61041.2025.00718
M3 - Conference contribution
AN - SCOPUS:105003632149
T3 - Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025
SP - 7389
EP - 7398
BT - Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025
Y2 - 28 February 2025 through 4 March 2025
ER -