TY - JOUR
T1 - Investigating the Factors Impacting Adversarial Attack and Defense Performances in Federated Learning
AU - Aljaafari, Nura
AU - Nazzal, Mahmoud
AU - Sawalmeh, Ahmad H.
AU - Khreishah, Abdallah
AU - Anan, Muhammad
AU - Algosaibi, Abdulelah
AU - Alnaeem, Mohammed Abdulaziz
AU - Aldalbahi, Adel
AU - Alhumam, Abdulaziz
AU - Vizcarra, Conrado P.
N1 - Publisher Copyright:
© 1988-2012 IEEE.
PY - 2024
Y1 - 2024
N2 - Despite the promising success of federated learning in various application areas, its inherent vulnerability to adversarial attacks hinders its applicability in security-critical areas. This calls for developing viable defense measures against such attacks. A prerequisite for this development, however, is an understanding of what creates, promotes, and aggravates this vulnerability. To date, developing this understanding remains an outstanding gap in the literature. Accordingly, this paper presents an attempt at developing such an understanding, primarily from two main perspectives. The first perspective concerns the factors, elements, and parameters contributing to the vulnerability of federated learning models to adversarial attacks, their degrees of severity, and their combined effects. This includes addressing diverse operating conditions, attack types and scenarios, and collaborations between attacking agents. The second perspective concerns analyzing how the adversarial property of a model appears in the way it updates its coefficients, and exploiting this for defense purposes. These analyses are conducted through extensive experiments on image and text classification tasks. Simulation results reveal the influence of specific parameters and factors on the severity of this vulnerability. In addition, the proposed defense strategy is shown to provide promising performance.
KW - Adversarial attacks
KW - adversarial defense
KW - federated learning
KW - machine learning security
UR - http://www.scopus.com/inward/record.url?scp=85130497453&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85130497453&partnerID=8YFLogxK
U2 - 10.1109/TEM.2022.3155353
DO - 10.1109/TEM.2022.3155353
M3 - Article
AN - SCOPUS:85130497453
SN - 0018-9391
VL - 71
SP - 12542
EP - 12555
JO - IEEE Transactions on Engineering Management
JF - IEEE Transactions on Engineering Management
ER -