Implementation of Decentralized Reinforcement Learning-Based Multi-Quadrotor Flocking

Pramod Abichandani, Christian Speck, Donald Bucci, William McIntyre, Deepan Lobo

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

Enabling coordinated motion of multiple quadrotors is an active area of research in the field of small unmanned aerial vehicles (sUAVs). While many techniques in the literature address this problem, these studies are typically limited to simulation results and seldom account for wind disturbances. This paper presents the experimental validation of a decentralized planner based on multi-objective reinforcement learning (RL) that achieves waypoint-based flocking (separation, velocity alignment, and cohesion) for multiple quadrotors in the presence of wind gusts. The planner is learned using an object-focused, greatest mass, state-action-reward-state-action (OF-GM-SARSA) approach. The Dryden wind gust model is used to simulate wind gusts during hardware-in-the-loop (HWIL) tests. The hardware and software architecture developed for the multi-quadrotor flocking controller is described in detail. HWIL and outdoor flight test results show that the trained RL planner can generalize the flocking behaviors learned in training to the real-world flight dynamics of the DJI M100 quadrotor in windy conditions.
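
To illustrate the greatest-mass idea behind the OF-GM-SARSA planner described above, the following is a minimal Python sketch: each flocking objective (separation, velocity alignment, cohesion) keeps its own Q-table, actions are chosen epsilon-greedily over the summed Q-values, and each table receives an independent on-policy SARSA update. The state/action discretization, table sizes, and hyperparameters here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

N_STATES, N_ACTIONS = 100, 9          # assumed discretization, for illustration only
OBJECTIVES = ("separation", "alignment", "cohesion")

# One Q-table per flocking objective (object-focused decomposition).
q_tables = {obj: np.zeros((N_STATES, N_ACTIONS)) for obj in OBJECTIVES}

def gm_action(state: int, epsilon: float = 0.1) -> int:
    """Epsilon-greedy action over the summed ('greatest mass') Q-values."""
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    combined = sum(q[state] for q in q_tables.values())
    return int(np.argmax(combined))

def sarsa_update(obj: str, s: int, a: int, r: float, s_next: int, a_next: int,
                 alpha: float = 0.1, gamma: float = 0.95) -> None:
    """On-policy SARSA update applied independently to one objective's table."""
    q = q_tables[obj]
    q[s, a] += alpha * (r + gamma * q[s_next, a_next] - q[s, a])
```

A usage note: during learning, each objective contributes its own reward signal to `sarsa_update`, while `gm_action` arbitrates among the objectives at decision time by summing their value estimates rather than hand-tuning a single scalar reward.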

Original language: English (US)
Pages (from-to): 132491-132507
Number of pages: 17
Journal: IEEE Access
Volume: 9
DOIs
State: Published - 2021

All Science Journal Classification (ASJC) codes

  • General Engineering
  • General Computer Science
  • General Materials Science

Keywords

  • Cooperative systems
  • design for experiments
  • motion planning
  • multi-agent systems
  • supervised learning
  • unmanned aerial vehicles
