Abstract
Exoskeletons have enormous potential to improve human locomotive performance[1–3]. However, their development and broad dissemination are limited by the requirement for lengthy human tests and handcrafted control laws[2]. Here we show an experiment-free method to learn a versatile control policy in simulation. Our learning-in-simulation framework leverages dynamics-aware musculoskeletal and exoskeleton models and data-driven reinforcement learning to bridge the gap between simulation and reality without human experiments. The learned controller is deployed on a custom hip exoskeleton that automatically generates assistance across different activities, reducing metabolic rates by 24.3%, 13.1% and 15.4% for walking, running and stair climbing, respectively. Our framework may offer a generalizable and scalable strategy for the rapid development and widespread adoption of a variety of assistive robots for both able-bodied and mobility-impaired individuals.
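The abstract describes training an assistance controller entirely in simulation before deployment. As a purely illustrative sketch of that idea (not the paper's musculoskeletal model, reward, or learning algorithm, all of which are hypothetical stand-ins here), the loop below improves a one-parameter torque policy against a toy dynamics model by keeping only perturbations that reduce simulated cost:

```python
import random

def simulate(policy_gain, steps=50):
    """Toy 1-D stand-in for a simulated gait cycle: the policy applies a
    torque proportional to the tracking error; lower accumulated cost is
    better. This is NOT the paper's dynamics-aware musculoskeletal model."""
    state, cost = 1.0, 0.0
    for _ in range(steps):
        torque = -policy_gain * state      # simple proportional assistance policy
        state += 0.1 * (state + torque)    # crude forward-dynamics step
        cost += state * state              # accumulated tracking error
    return cost

def train_in_simulation(iterations=200, seed=0):
    """Random-search stand-in for reinforcement learning: perturb the policy
    parameter and accept changes that lower the simulated cost."""
    rng = random.Random(seed)
    gain, best = 0.0, simulate(0.0)
    for _ in range(iterations):
        candidate = gain + rng.gauss(0.0, 0.2)
        cost = simulate(candidate)
        if cost < best:                    # keep only improvements
            gain, best = candidate, cost
    return gain, best

gain, cost = train_in_simulation()
```

The point of the sketch is only the workflow: because every rollout happens in simulation, no human experiments are needed during policy improvement, which is the property the abstract emphasizes.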
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 353–359 |
| Number of pages | 7 |
| Journal | Nature |
| Volume | 630 |
| Issue number | 8016 |
| DOIs | |
| State | Published - Jun 13 2024 |
All Science Journal Classification (ASJC) codes
- General