Abstract
An attack-defense confrontation problem arises when a robot swarm attacks a territory protected by another swarm. In denied environments, global positioning and communication are largely unavailable, making it difficult for a swarm to collaborate and confront an opponent. Commonly used deep reinforcement learning (DRL) relies on pretraining, which is time consuming and strongly environment dependent, especially in denied environments. To study attack strategies in denied environments, this work proposes, for the first time, an evolutionary algorithm (EA)-based attack strategy for swarm robots. Each robot obtains its situational information by perceiving nearby peers and enemies. This information is used to evaluate the benefits or threats of the robot's next perceptible attack positions. Each robot then uses an EA to optimize its fitness function and search for its optimal position, and a collision-avoidance strategy is integrated into the algorithm. Hence, a robot swarm can collaborate and handle confrontation as long as each robot can sense its surroundings: robots detect one another locally with their own sensors, without global positioning or communication devices. Experimental analyses show that the EA-based attack strategy has better scalability and more potential for solving large-scale confrontation problems than DRL-based algorithms. Rationales for the proposed method are presented to demonstrate its capability.
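The abstract does not give implementation details; the following is a minimal, hypothetical Python sketch (not the authors' code) of how a single robot might run an evolutionary search over candidate attack positions, scoring each candidate only from locally sensed peers and enemies and penalizing positions that would violate a collision-avoidance distance. All function names, weights, ranges, and operators are illustrative assumptions.

```python
# Hypothetical per-robot evolutionary search over candidate attack positions.
# Fitness weights, sensing radius, and variation operators are assumptions,
# not the paper's actual formulation.
import random

SAFE_DIST = 1.0  # assumed minimum spacing to peers (collision avoidance)


def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


def fitness(pos, peers, enemies):
    """Score a candidate position using only locally perceived neighbors."""
    benefit = sum(1.0 / (1e-6 + dist(pos, e)) for e in enemies)        # approach enemies
    threat = sum(1.0 / (1e-6 + dist(pos, e)) ** 2 for e in enemies)    # but not too close
    crowding = sum(1.0 for p in peers if dist(pos, p) < SAFE_DIST)     # avoid collisions
    return benefit - 0.1 * threat - 5.0 * crowding


def evolve_next_position(current, peers, enemies, pop_size=30, gens=20, step=2.0):
    """Simple (mu + lambda)-style search for the robot's next position."""
    pop = [(current[0] + random.uniform(-step, step),
            current[1] + random.uniform(-step, step)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, peers, enemies), reverse=True)
        elite = pop[: pop_size // 2]
        # Refill the population by mutating the elite candidates.
        pop = elite + [(p[0] + random.gauss(0, 0.5), p[1] + random.gauss(0, 0.5))
                       for p in elite]
    return max(pop, key=lambda p: fitness(p, peers, enemies))


if __name__ == "__main__":
    robot = (0.0, 0.0)
    peers = [(1.5, 0.5), (-2.0, 1.0)]       # sensed locally, no global positioning
    enemies = [(6.0, 4.0), (8.0, -3.0)]
    print(evolve_next_position(robot, peers, enemies))
```

In such a scheme, each robot runs this optimization independently at every decision step, so coordination emerges from the shared fitness structure rather than from explicit communication.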
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 1562-1574 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Evolutionary Computation |
| Volume | 27 |
| Issue number | 6 |
| DOIs | |
| State | Published - Dec 1 2023 |
All Science Journal Classification (ASJC) codes
- Software
- Theoretical Computer Science
- Computational Theory and Mathematics
Keywords
- Attack-defense confrontation
- attack strategy
- denied environment
- evolutionary algorithm (EA)
- robot swarm