Abstract
We study constrained Markov decision processes (CMDPs), in which an agent aims to maximize the expected cumulative reward subject to a constraint on the expected total value of a utility function. Existing approaches have primarily focused on soft constraint violation, which allows compensation across episodes and thus makes the constraints easier to satisfy. In contrast, we consider a stronger hard constraint violation metric, under which only positive constraint violations are accumulated. Our main result is the development of the first model-free, simulator-free algorithm that simultaneously achieves sub-linear regret and sub-linear hard constraint violation, even in large-scale systems. In particular, we show that Õ(√(d³H⁴K)) regret and Õ(√(d³H⁴K)) hard constraint violation bounds can be achieved, where K is the number of episodes, d is the dimension of the feature mapping, and H is the length of each episode. Our results are achieved via novel adaptations of the primal-dual LSVI-UCB algorithm: it searches for the dual variable that balances regret against constraint violation within every episode, rather than updating the dual variable at the end of each episode. This turns out to be crucial for our theoretical guarantees when dealing with hard constraint violations.
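For concreteness, a minimal sketch of the two violation metrics contrasted in the abstract, under assumed notation (the symbols π_k, V_g^{π_k}, b, and [·]₊ are labels chosen here for illustration, not necessarily the paper's):

```latex
% Assumed notation: \pi_k is the policy played in episode k,
% V_g^{\pi_k} its expected total utility, b the constraint
% threshold, and [x]_+ = \max(x, 0).

% Soft violation: per-episode shortfalls and surpluses can
% cancel across episodes.
\mathrm{Viol}_{\mathrm{soft}}(K)
  = \Bigl[\, \sum_{k=1}^{K} \bigl( b - V_g^{\pi_k} \bigr) \Bigr]_+

% Hard violation: only positive per-episode shortfalls
% accumulate, so over-satisfying the constraint in one episode
% cannot offset violating it in another.
\mathrm{Viol}_{\mathrm{hard}}(K)
  = \sum_{k=1}^{K} \bigl[\, b - V_g^{\pi_k} \bigr]_+
```

Since [x]₊ ≥ x for every term, Viol_hard(K) ≥ Viol_soft(K), which is why a sub-linear bound on the hard metric is the stronger guarantee.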
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1054-1062 |
| Number of pages | 9 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 238 |
| State | Published - 2024 |
| Event | 27th International Conference on Artificial Intelligence and Statistics, AISTATS 2024 - Valencia, Spain |
| Duration | May 2 2024 → May 4 2024 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability