Abstract
Multiagent deep reinforcement learning (DRL) makes decisions based on the system states observed by agents, so any uncertainty in these observations may mislead agents into taking wrong actions. Mean-field actor-critic (MFAC) reinforcement learning is well known in the multiagent field because it effectively handles the scalability problem. However, it is sensitive to state perturbations, which can significantly degrade team rewards. This work proposes robust MFAC (RoMFAC) reinforcement learning with two innovations: 1) a new objective function for training actors, composed of a policy gradient function related to the expected cumulative discounted reward on sampled clean states and an action loss function representing the difference between actions taken on clean and adversarial states, and 2) a repetitive regularization of the action loss that ensures the trained actors achieve excellent performance. Furthermore, this work proposes a game model named the state-adversarial stochastic game (SASG). Although the Nash equilibrium of an SASG may not exist, adversarial perturbations to states in RoMFAC are proven to be defensible based on the SASG. Experimental results show that RoMFAC is robust against adversarial perturbations while maintaining competitive performance in environments without perturbations.
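As a rough illustration of the composite actor objective described above, the following minimal PyTorch-style sketch combines a policy-gradient term computed on clean states with an action-consistency loss on adversarially perturbed states. The network architecture, the weighting factor `lambda_reg`, and the use of a mean-squared action loss are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a composite actor loss: policy gradient on clean states plus an
# action-consistency penalty on adversarial states. Shapes, the lambda_reg
# weight, and the MSE action loss are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Actor(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Probability distribution over discrete actions.
        return F.softmax(self.net(state), dim=-1)


def actor_loss(actor: Actor,
               clean_states: torch.Tensor,      # [B, state_dim]
               adv_states: torch.Tensor,        # [B, state_dim], perturbed copies
               actions: torch.Tensor,           # [B], sampled actions (long)
               advantages: torch.Tensor,        # [B], advantage estimates
               lambda_reg: float = 0.5) -> torch.Tensor:
    """Policy-gradient term on clean states + action loss on adversarial states."""
    probs_clean = actor(clean_states)
    log_probs = torch.log(
        probs_clean.gather(1, actions.unsqueeze(1)).squeeze(1) + 1e-8)
    pg_loss = -(log_probs * advantages).mean()  # maximize expected return

    probs_adv = actor(adv_states)
    # Penalize the difference between actions taken on clean and adversarial states.
    action_loss = F.mse_loss(probs_adv, probs_clean.detach())

    return pg_loss + lambda_reg * action_loss
```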
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1-12 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| DOIs | |
| State | Accepted/In press - 2023 |
All Science Journal Classification (ASJC) codes
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence
Keywords
- Games
- Markov processes
- Mean-field actor-critic (MFAC) reinforcement learning
- Perturbation methods
- Reinforcement learning
- Robustness
- Scalability
- Training
- multiagent systems
- state-adversarial stochastic game (SASG)