Learning Crowd Motion Dynamics with Crowds

Bilas Talukdar, Yunhao Zhang, Tomer Weiss

Research output: Contribution to journal › Article › peer-review

Abstract

Reinforcement Learning (RL) has become a popular framework for learning desired behaviors for computational agents in graphics and games. In a multi-agent crowd, one major goal is for agents to avoid collisions while navigating a dynamic environment. Another goal is to simulate natural-looking crowds, which is difficult to define due to the ambiguity of what constitutes natural crowd motion. We introduce a novel methodology for simulating crowds that learns the most-preferred crowd simulation behaviors from crowd-sourced votes via Bayesian optimization. Our method uses deep reinforcement learning to simulate crowds, with crowdsourcing used to select policy hyper-parameters. Training agents with such parameters results in a crowd simulation that users prefer. We demonstrate our method's robustness across multiple scenarios and metrics, showing it is superior to alternative policies and prior work.
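To make the hyper-parameter selection loop described in the abstract concrete, here is a minimal Bayesian-optimization sketch. It is not the paper's implementation: the `crowd_preference` function is a hypothetical stand-in for the aggregated crowd-sourced vote signal, and the Gaussian-process surrogate with an upper-confidence-bound acquisition is one common choice of BO machinery, assumed here for illustration.

```python
import numpy as np

def rbf(a, b, length=0.3):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Gaussian-process posterior mean and std at query points Xs,
    # conditioned on observed (X, y) pairs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(Kss - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.maximum(var, 0.0))

def crowd_preference(theta):
    # Hypothetical stand-in for the crowd-vote signal: the fraction of
    # voters preferring a simulation trained with hyper-parameter theta.
    # This toy model peaks at theta = 0.6.
    return np.exp(-((theta - 0.6) ** 2) / 0.02)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 3)        # initial hyper-parameter probes
y = crowd_preference(X)
grid = np.linspace(0.0, 1.0, 200)   # candidate hyper-parameter values

for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    ucb = mu + 2.0 * sigma          # upper-confidence-bound acquisition
    theta_next = grid[np.argmax(ucb)]
    X = np.append(X, theta_next)
    y = np.append(y, crowd_preference(theta_next))

best = X[np.argmax(y)]              # most-preferred hyper-parameter found
```

In the actual method, each evaluation of the preference signal would require training an RL policy with the candidate hyper-parameters and collecting votes on the resulting simulation, which is why a sample-efficient optimizer such as BO is a natural fit.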

Original language: English (US)
Article number: 9
Journal: Proceedings of the ACM on Computer Graphics and Interactive Techniques
Volume: 7
Issue number: 1
State: Published - May 13 2024

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Computer Graphics and Computer-Aided Design

Keywords

  • Collision Avoidance
  • Policy Optimization
  • Position Based Dynamics
