Abstract
Multiagent deep reinforcement learning (MADRL) has recently been applied in many fields, including Industry 5.0, but it is sensitive to adversarial attacks. Although adversarial attacks can be detrimental, they are crucial for testing models and for enhancing their robustness. Existing attacks on MADRL-based models are insufficient because they perturb a fixed set of agents, without considering cases where the perturbed agents change over time. In this article, we present a novel adversarial attack framework. In this framework, we define critical agents, which change over time: perturbing them slightly perturbs the whole multiagent system greatly. We then identify critical agents through their worst-case joint actions. In this identification process, we use gradient information, differential evolution, and SARSA to handle the challenge posed by changing perturbed agents and to compute the worst-case joint actions. After identifying the critical agents, we perturb them with a targeted attack method. We apply our method to attack models trained by two state-of-the-art MADRL algorithms in three environments, including two industry-related ones. The experimental results demonstrate that our method has a stronger perturbing ability than existing methods.
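The abstract describes a per-step pipeline: search each agent's bounded action perturbations for the worst-case joint action, then flag the agent whose perturbation degrades the system most. Below is a minimal sketch of that idea, not the paper's actual implementation: it assumes a joint action-value estimate `q_value` (e.g., learned with SARSA, as the abstract suggests), per-agent policy callables `policies`, and a perturbation budget `epsilon`, all of which are hypothetical names, and it uses SciPy's `differential_evolution` to stand in for the evolutionary search.

```python
# Illustrative sketch (assumptions, not the paper's API): identify the
# "critical" agent at one timestep by searching, per agent, for the bounded
# action perturbation that most degrades a learned joint action-value
# estimate, then returning the agent with the worst resulting value.
import numpy as np
from scipy.optimize import differential_evolution

def find_critical_agent(obs, policies, q_value, epsilon=0.1):
    """Return (critical agent index, its worst-case perturbed action).

    obs      : list of per-agent observations
    policies : list of callables, policies[i](obs[i]) -> continuous action
    q_value  : callable, q_value(obs, joint_action) -> scalar value estimate
    epsilon  : per-dimension perturbation budget on the attacked action
    """
    clean_actions = [policies[i](obs[i]) for i in range(len(policies))]
    best = (None, None, np.inf)  # (agent index, perturbed action, worst Q)

    for i, a in enumerate(clean_actions):
        # Search box around agent i's clean action; other agents act cleanly.
        bounds = [(x - epsilon, x + epsilon) for x in np.atleast_1d(a)]

        def perturbed_value(a_pert, i=i):
            joint = list(clean_actions)
            joint[i] = np.asarray(a_pert)
            return q_value(obs, joint)  # minimizing Q = worst-case joint action

        res = differential_evolution(perturbed_value, bounds, maxiter=20, seed=0)
        if res.fun < best[2]:
            best = (i, res.x, res.fun)

    return best[0], best[1]
```

Because the search is rerun at every step, the flagged agent can change over time, which is the scenario the fixed-agent attacks in prior work do not cover; the paper additionally uses gradient information, which this sketch omits for brevity.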
Original language | English (US)
---|---
Pages (from-to) | 7633-7646
Number of pages | 14
Journal | IEEE Transactions on Systems, Man, and Cybernetics: Systems
Volume | 54
Issue number | 12
DOIs |
State | Published - 2024
All Science Journal Classification (ASJC) codes
- Software
- Control and Systems Engineering
- Human-Computer Interaction
- Computer Science Applications
- Electrical and Electronic Engineering
Keywords
- Adversarial attacks
- continuous action space
- industry 5.0
- multiagent deep reinforcement learning (MADRL)