Adversarial Attacks on Multiagent Deep Reinforcement Learning Models in Continuous Action Space

Ziyuan Zhou, Guanjun Liu, Weiran Guo, Meng Chu Zhou

Research output: Contribution to journal › Article › peer-review

Abstract

Multiagent deep reinforcement learning (MADRL) has recently been applied in many fields, including Industry 5.0, but it is sensitive to adversarial attacks. Although adversarial attacks can be detrimental, they are crucial for testing models and for helping to enhance their robustness. Existing attacks on MADRL-based models are insufficient because they perturb a fixed set of agents, without considering cases where the perturbed agents change. In this article, we present a novel adversarial attack framework. In this framework, we define critical agents that change over time, i.e., agents for which a small perturbation greatly disturbs the whole multiagent system. We then identify critical agents through their worst-case joint actions. In this identification process, we use gradient information, differential evolution, and SARSA to handle the challenge caused by changes in the perturbed agents and to compute the worst-case joint actions. After identifying the critical agents, we use a targeted attack method to perturb them. We apply our method to attack models trained by two state-of-the-art MADRL algorithms in three environments, including two industry-related ones. The experimental results demonstrate that our method has a stronger perturbing ability than existing methods.
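The identification step described above can be illustrated with a minimal sketch. Everything here is hypothetical and greatly simplified relative to the paper's method: `joint_value` is a toy stand-in for a learned joint value function (which the paper estimates with SARSA), and differential evolution searches, per agent, for the bounded worst-case action perturbation; the critical agent is the one whose worst-case perturbation degrades the joint value the most.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical joint value over three agents' 1-D continuous actions
# in [0, 1]; the second agent's action dominates the outcome.
def joint_value(actions):
    weights = np.array([1.0, 3.0, 0.5])
    return -np.sum(weights * (actions - 0.5) ** 2)

def find_critical_agent(actions, eps=0.1):
    """Return the index of the agent whose worst-case bounded
    perturbation causes the largest drop in joint value, plus
    the per-agent worst-case drops."""
    base = joint_value(actions)
    drops = []
    for i in range(len(actions)):
        # Minimizing the perturbed joint value over delta in
        # [-eps, eps] gives the worst case for agent i.
        def perturbed_value(delta, i=i):
            a = actions.copy()
            a[i] = np.clip(a[i] + delta[0], 0.0, 1.0)
            return joint_value(a)
        res = differential_evolution(perturbed_value,
                                     bounds=[(-eps, eps)],
                                     seed=0, tol=1e-8)
        drops.append(base - res.fun)
    return int(np.argmax(drops)), drops

critical, drops = find_critical_agent(np.array([0.5, 0.5, 0.5]))
```

Here the dominant agent (index 1) is flagged as critical, since the same perturbation budget hurts the joint value three times as much through its action; the paper's method additionally re-runs this identification over time, so the critical agent can change from step to step.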

Original language: English (US)
Pages (from-to): 7633-7646
Number of pages: 14
Journal: IEEE Transactions on Systems, Man, and Cybernetics: Systems
Volume: 54
Issue number: 12
DOIs
State: Published - 2024

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Human-Computer Interaction
  • Computer Science Applications
  • Electrical and Electronic Engineering

Keywords

  • Adversarial attacks
  • continuous action space
  • industry 5.0
  • multiagent deep reinforcement learning (MADRL)
