TY - CHAP
T1 - Unfair Trojan
T2 - Targeted Backdoor Attacks Against Model Fairness
AU - Furth, Nicholas
AU - Khreishah, Abdallah
AU - Liu, Guanxiong
AU - Phan, Hai
AU - Jararweh, Yasser
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025
Y1 - 2025
N2 - Machine learning models have proven capable of making accurate predictions on complex tasks involving data such as images and graphs. However, they are increasingly vulnerable to various forms of attack, such as backdoor and data poisoning attacks, that can adversely affect model behavior. These attacks become more prevalent and complex in federated learning, where multiple local models contribute to a single global model and communicate using only local gradients. Additionally, these models tend to make unfair predictions with respect to certain protected features. Previously published works address these issues both individually and jointly, typically by leveraging the model’s loss function to account for fairness or by adding perturbations to the unfair data. However, there has been little study of how an adversary can launch an attack that controls model fairness. This chapter demonstrates a novel and flexible attack, which we call Unfair Trojan, that targets model fairness while remaining stealthy. Using this attack, an adversary can have devastating effects on machine learning models, increasing their demographic parity, a key fairness metric, by up to 30% without significantly decreasing model accuracy. This chapter reveals the vulnerabilities of federated learning systems with regard to fairness and highlights the need for more robust defenses against such attacks. Our findings show the importance of understanding fairness-related attacks so that they can be mitigated. By revealing an adversary’s ability to exploit and amplify existing fairness issues, this chapter underscores the need for more comprehensive and proactive strategies to ensure fair predictions in machine learning applications.
AB - Machine learning models have proven capable of making accurate predictions on complex tasks involving data such as images and graphs. However, they are increasingly vulnerable to various forms of attack, such as backdoor and data poisoning attacks, that can adversely affect model behavior. These attacks become more prevalent and complex in federated learning, where multiple local models contribute to a single global model and communicate using only local gradients. Additionally, these models tend to make unfair predictions with respect to certain protected features. Previously published works address these issues both individually and jointly, typically by leveraging the model’s loss function to account for fairness or by adding perturbations to the unfair data. However, there has been little study of how an adversary can launch an attack that controls model fairness. This chapter demonstrates a novel and flexible attack, which we call Unfair Trojan, that targets model fairness while remaining stealthy. Using this attack, an adversary can have devastating effects on machine learning models, increasing their demographic parity, a key fairness metric, by up to 30% without significantly decreasing model accuracy. This chapter reveals the vulnerabilities of federated learning systems with regard to fairness and highlights the need for more robust defenses against such attacks. Our findings show the importance of understanding fairness-related attacks so that they can be mitigated. By revealing an adversary’s ability to exploit and amplify existing fairness issues, this chapter underscores the need for more comprehensive and proactive strategies to ensure fair predictions in machine learning applications.
UR - http://www.scopus.com/inward/record.url?scp=85203253379&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85203253379&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-58923-2_5
DO - 10.1007/978-3-031-58923-2_5
M3 - Chapter
AN - SCOPUS:85203253379
T3 - Springer Optimization and Its Applications
SP - 149
EP - 168
BT - Springer Optimization and Its Applications
PB - Springer
ER -