TY - GEN
T1 - Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples
AU - Liu, Guanxiong
AU - Khalil, Issa
AU - Khreishah, Abdallah
N1 - Publisher Copyright:
© 2021 ACM.
PY - 2021/4/26
Y1 - 2021/4/26
AB - Adversarial examples are among the biggest challenges for machine learning models, especially neural network classifiers. Adversarial examples are inputs manipulated with perturbations that are imperceptible to humans yet able to fool machine learning models. Researchers have made great progress in utilizing adversarial training as a defense. However, the overwhelming computational cost degrades its applicability, and little has been done to overcome this issue. Single-step adversarial training methods have been proposed as computationally viable solutions; however, they still fail to defend against iterative adversarial examples. In this work, we first experimentally analyze several state-of-the-art (SOTA) defenses against adversarial examples. Then, based on observations from these experiments, we propose a novel single-step adversarial training method that can defend against both single-step and iterative adversarial examples. Through extensive evaluations, we demonstrate that our proposed method successfully combines the advantages of both single-step (low training overhead) and iterative (high robustness) adversarial training defenses. Compared with ATDA on the CIFAR-10 dataset, for example, our proposed method achieves a 35.67% enhancement in test accuracy and a 19.14% reduction in training time. When compared with methods that use BIM or Madry examples (iterative methods) on the CIFAR-10 dataset, our proposed method saves up to 76.03% in training time, with less than 3.78% degradation in test accuracy. Finally, our experiments with the ImageNet dataset clearly show the scalability of our approach and its performance advantages over SOTA single-step approaches.
KW - adversarial example
KW - adversarial training
KW - neural network
UR - http://www.scopus.com/inward/record.url?scp=85104972132&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85104972132&partnerID=8YFLogxK
U2 - 10.1145/3422337.3447841
DO - 10.1145/3422337.3447841
M3 - Conference contribution
AN - SCOPUS:85104972132
T3 - CODASPY 2021 - Proceedings of the 11th ACM Conference on Data and Application Security and Privacy
SP - 17
EP - 27
BT - CODASPY 2021 - Proceedings of the 11th ACM Conference on Data and Application Security and Privacy
PB - Association for Computing Machinery, Inc
T2 - 11th ACM Conference on Data and Application Security and Privacy, CODASPY 2021
Y2 - 26 April 2021 through 28 April 2021
ER -