Value Approximator-based Learning Model Predictive Control for Iterative Tasks

Hanqiu Bao, Qi Kang, Xudong Shi, MengChu Zhou, Jing An, Yusuf Al-Turki

Research output: Contribution to journal › Article › peer-review

Abstract

Maximizing the performance of a system over an infinite horizon without a reference trajectory is a challenging problem in iterative control tasks. This paper introduces a Value Approximator-based Learning Model Predictive Control (VALMPC) framework that enhances system performance by learning from previous trajectories. A value approximator is introduced to recursively reconstruct a terminal cost function, reformulating the infinite-time optimization problem as a finite-time one. This work proposes a novel controller design approach and shows its recursive feasibility and stability. Moreover, the convergence of the closed-loop trajectory and the optimality of the steady-state trajectory as the number of iterations tends to infinity are proven for general nonlinear systems. Simulation and comparison results show that the proposed control method requires less storage than two state-of-the-art methods, and its resulting trajectory is validated to achieve optimality.
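To illustrate the kind of learning loop the abstract describes, the following is a minimal sketch, not the authors' implementation: a finite-horizon controller for an iterative task whose terminal cost is a value approximator fitted to cost-to-go data collected from previous iterations. The double-integrator system, stage costs, horizon, nearest-neighbor approximator, and random-shooting solver below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed system: discrete-time double integrator, x = [position, velocity], scalar input.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)   # stage-cost state weight (assumed)
R = 0.1         # stage-cost input weight (assumed)
N = 5           # prediction horizon (assumed)
T = 60          # task length per iteration (assumed)

def stage_cost(x, u):
    return float(x @ Q @ x + R * u * u)

def terminal_cost(x, data_x, data_v):
    """Value approximator: nearest-neighbor lookup over stored (state, cost-to-go)
    samples. Any regressor could play this role; this choice keeps the sketch short."""
    if len(data_x) == 0:
        return 50.0 * float(x @ x)  # conservative fallback before any data exists
    d = np.linalg.norm(np.asarray(data_x) - x, axis=1)
    return float(data_v[int(np.argmin(d))])

def solve_finite_horizon(x0, data_x, data_v, n_samples=300):
    """Random-shooting approximation of the finite-horizon problem with the learned
    terminal cost; returns the first input of the best sampled input sequence."""
    best_u0, best_cost = 0.0, np.inf
    for _ in range(n_samples):
        u_seq = rng.uniform(-1.0, 1.0, size=N)
        x, cost = x0.copy(), 0.0
        for u in u_seq:
            cost += stage_cost(x, u)
            x = A @ x + (B * u).ravel()
        cost += terminal_cost(x, data_x, data_v)
        if cost < best_cost:
            best_cost, best_u0 = cost, u_seq[0]
    return best_u0

data_x, data_v = [], []          # states from past iterations and their costs-to-go
for iteration in range(5):
    x = np.array([2.0, 0.0])     # same initial condition at every iteration
    traj, costs = [x.copy()], []
    for _ in range(T):
        u = solve_finite_horizon(x, data_x, data_v)
        costs.append(stage_cost(x, u))
        x = A @ x + (B * u).ravel()
        traj.append(x.copy())
    # Realized cost-to-go from every visited state updates the value approximator.
    ctg = np.cumsum(costs[::-1])[::-1]
    data_x.extend(traj[:-1])
    data_v.extend(ctg.tolist())
    print(f"iteration {iteration}: total cost {sum(costs):.3f}")
```

Under these assumptions, each completed iteration enlarges the data used by the terminal-cost approximator, so later iterations optimize over a finite horizon while implicitly accounting for the long-run cost observed in earlier trajectories.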

Original language: English (US)
Pages (from-to): 1-8
Number of pages: 8
Journal: IEEE Transactions on Automatic Control
DOIs
State: Accepted/In press - 2024

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering

Keywords

  • Iteration control
  • learning
  • nonlinear systems
  • value approximator
  • vehicle control

