Reinforcement Learning for Mean-Field Game

Mridul Agarwal, Vaneet Aggarwal, Arnob Ghosh, Nilay Tiwari

Research output: Contribution to journal › Article › peer-review

Abstract

Stochastic games provide a framework for interactions among multiple agents and enable a myriad of applications. In these games, agents decide on actions simultaneously; after the actions are taken, the state of every agent updates to the next state, and each agent receives a reward. However, finding an equilibrium (if one exists) in such games is often difficult when the number of agents is large. This paper focuses on finding a mean-field equilibrium (MFE) in an action-coupled stochastic game setting in an episodic framework. It is assumed that an agent can approximate the impact of the other agents' actions by the empirical distribution of actions. All agents know the action distribution and employ lower-myopic best response dynamics to choose the optimal oblivious strategy. This paper proposes a posterior-sampling-based approach for reinforcement learning in the mean-field game, where each agent samples transition probabilities based on previously observed transitions. We show that the policy and action distributions converge to the optimal oblivious strategy and the limiting distribution, respectively, which together constitute an MFE.
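
To make the approach described in the abstract concrete, the sketch below shows one way an episodic posterior-sampling loop for a mean-field game might look. It is illustrative only, not the paper's algorithm: the tabular sizes, the Dirichlet prior, the reward function `reward(s, a, dist)`, and the smoothed mean-field update are all assumptions, and `best_response` uses plain finite-horizon value iteration in place of the paper's lower-myopic best response dynamics.

```python
import numpy as np

# Minimal posterior-sampling sketch for a mean-field game (illustrative
# only; sizes, reward shape, and priors are assumptions, not the paper's).
S, A, H, EPISODES = 5, 3, 10, 200  # states, actions, horizon, episodes
rng = np.random.default_rng(0)

# Hidden ground-truth dynamics, used here only to simulate the environment.
P_true = rng.dirichlet(np.ones(S), size=(S, A))

# Dirichlet counts forming the posterior over the transition kernel P(s'|s,a).
counts = np.ones((S, A, S))

# Empirical distribution of the population's actions (the mean field).
action_dist = np.full(A, 1.0 / A)

def reward(s, a, dist):
    # Hypothetical action-coupled reward: the payoff depends on the agent's
    # own action and on the population's action distribution.
    return -abs(a - dist @ np.arange(A)) / A

def best_response(P, dist):
    # Finite-horizon value iteration against a fixed mean field, giving an
    # oblivious policy (a stand-in for lower-myopic best response dynamics).
    V = np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = np.array([[reward(s, a, dist) + P[s, a] @ V for a in range(A)]
                      for s in range(S)])
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

for ep in range(EPISODES):
    # Posterior sampling: draw a transition model from the current posterior,
    # built from previously observed transitions.
    P = np.array([[rng.dirichlet(counts[s, a]) for a in range(A)]
                  for s in range(S)])
    policy = best_response(P, action_dist)

    s = rng.integers(S)
    ep_actions = np.zeros(A)
    for h in range(H):
        a = policy[h, s]
        ep_actions[a] += 1
        s_next = rng.choice(S, p=P_true[s, a])  # environment step
        counts[s, a, s_next] += 1               # update the posterior
        s = s_next

    # Update the empirical mean-field action distribution (smoothed here).
    action_dist = 0.9 * action_dist + 0.1 * ep_actions / H
```

In the paper's setting, repeating such a loop is shown to drive the policy and the action distribution toward the optimal oblivious strategy and the limiting distribution that together form the MFE.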

Original language: English (US)
Article number: 73
Journal: Algorithms
Volume: 15
Issue number: 3
DOIs
State: Published - Mar 2022
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Numerical Analysis
  • Computational Theory and Mathematics
  • Computational Mathematics

Keywords

  • Equilibrium
  • Mean-field game
  • Reinforcement learning
