Abstract
The ability to reuse trained models in Reinforcement Learning (RL) holds substantial practical value, particularly for complex tasks. While model reusability is widely studied for supervised models in data management, to the best of our knowledge, this is the first principled study of model reuse for RL. To capture trained policies, we develop a framework based on an expressive and lossless graph data model that accommodates both Temporal Difference learning and Deep RL-based algorithms. Our framework can capture arbitrary reward functions that can be composed at inference time. The framework comes with theoretical guarantees: it yields the same results as policies trained from scratch. We design a parameterized algorithm that strikes a balance between efficiency and quality with respect to cumulative reward. Our experiments on two common RL tasks (query refinement and robot movement) corroborate our theory and demonstrate the effectiveness and efficiency of our algorithms.
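To give a flavor of reward composition at inference time, below is a minimal illustrative sketch in which two tabular Q-functions are trained independently for separate reward components and then combined with user-chosen weights when acting. This is a generic RL idiom, not the paper's graph-based framework; the toy chain MDP, reward components, and weights are hypothetical, and the weighted sum of component Q-functions is only an approximation of the Q-function of the summed reward.

```python
# Illustrative sketch only: reusing separately trained tabular Q-functions and
# composing them at inference time. All names and parameters are hypothetical.
import numpy as np

N_STATES, N_ACTIONS = 5, 2   # toy chain MDP: action 0 = move left, 1 = move right

def train_q(reward_fn, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning for a single reward component."""
    q = np.zeros((N_STATES, N_ACTIONS))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            a = rng.integers(N_ACTIONS) if rng.random() < eps else int(q[s].argmax())
            s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
            r = reward_fn(s, a, s_next)
            q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
            s = s_next
    return q

# Two independently trained reward components (hypothetical).
q_goal = train_q(lambda s, a, s2: 1.0 if s2 == N_STATES - 1 else 0.0)  # reach goal
q_cost = train_q(lambda s, a, s2: -0.1)                                # per-step cost

# Compose at inference time with user-chosen weights (approximate composition).
w_goal, w_cost = 1.0, 0.5
def act(state):
    return int((w_goal * q_goal[state] + w_cost * q_cost[state]).argmax())

print([act(s) for s in range(N_STATES)])
```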
Original language | English (US) |
---|---|
Article number | 41 |
Journal | VLDB Journal |
Volume | 34 |
Issue number | 4 |
DOIs | |
State | Published - Jul 2025 |
All Science Journal Classification (ASJC) codes
- Information Systems
- Hardware and Architecture
Keywords
- Optimization algorithms
- Reinforcement Learning
- Reusability of ML models