TY - CPAPER
T1 - Quantifying the Vulnerability of Anomaly Detection Implementations to Nondeterminism-based Attacks
AU - Ahmed, Muyeed
AU - Neamtiu, Iulian
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Anomaly Detection (AD) is widely used in security applications such as intrusion detection, yet its vulnerability to nondeterminism-based attacks has gone unnoticed, and its robustness against such attacks has not been studied. Nondeterminism, i.e., output variation across runs on the same input dataset, is a common trait of AD implementations. We show that nondeterminism can be exploited by an attacker who tries to have a malicious input point (an outlier) classified as benign input (an inlier). In our threat model, the attacker has extremely limited capabilities: they can only retry the attack; they cannot influence the model, manipulate the AD/IDS implementation, or insert noise. We focus on three concrete, orthogonal attack scenarios: (1) a restart attack that exploits a simple re-run; (2) a resource attack that exploits the use of less computationally expensive parameter settings; and (3) an inconsistency attack that exploits differences between toolkits implementing the same algorithm. We quantify attack vulnerability in popular implementations of four AD algorithms (IF, RobCov, LOF, and OCSVM) and offer mitigation strategies. We show that in each scenario, despite the attacker's limited capabilities, attacks have a high likelihood of success.
AB - Anomaly Detection (AD) is widely used in security applications such as intrusion detection, yet its vulnerability to nondeterminism-based attacks has gone unnoticed, and its robustness against such attacks has not been studied. Nondeterminism, i.e., output variation across runs on the same input dataset, is a common trait of AD implementations. We show that nondeterminism can be exploited by an attacker who tries to have a malicious input point (an outlier) classified as benign input (an inlier). In our threat model, the attacker has extremely limited capabilities: they can only retry the attack; they cannot influence the model, manipulate the AD/IDS implementation, or insert noise. We focus on three concrete, orthogonal attack scenarios: (1) a restart attack that exploits a simple re-run; (2) a resource attack that exploits the use of less computationally expensive parameter settings; and (3) an inconsistency attack that exploits differences between toolkits implementing the same algorithm. We quantify attack vulnerability in popular implementations of four AD algorithms (IF, RobCov, LOF, and OCSVM) and offer mitigation strategies. We show that in each scenario, despite the attacker's limited capabilities, attacks have a high likelihood of success.
KW - Adversarial ML
KW - Anomaly Detection
KW - Program Nondeterminism
UR - http://www.scopus.com/inward/record.url?scp=85206479273&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85206479273&partnerID=8YFLogxK
U2 - 10.1109/AITest62860.2024.00013
DO - 10.1109/AITest62860.2024.00013
M3 - Conference contribution
AN - SCOPUS:85206479273
T3 - Proceedings - 6th IEEE International Conference on Artificial Intelligence Testing, AITest 2024
SP - 37
EP - 46
BT - Proceedings - 6th IEEE International Conference on Artificial Intelligence Testing, AITest 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 6th IEEE International Conference on Artificial Intelligence Testing, AITest 2024
Y2 - 15 July 2024 through 18 July 2024
ER -