Model reusability in Reinforcement Learning

Sepideh Nikookar, Sohrab Namazi Nia, Senjuti Basu Roy, Sihem Amer-Yahia, Behrooz Omidvar-Tehrani

Research output: Contribution to journal › Article › peer-review

Abstract

The ability to reuse trained models in Reinforcement Learning (RL) holds substantial practical value, in particular for complex tasks. While model reusability is widely studied for supervised models in data management, to the best of our knowledge, this is the first principled study of model reusability for RL. To capture trained policies, we develop a framework based on an expressive and lossless graph data model that accommodates both Temporal Difference Learning and Deep-RL-based algorithms. Our framework captures arbitrary reward functions that can be composed at inference time. The framework comes with theoretical guarantees: reused policies yield the same results as policies trained from scratch. We design a parameterized algorithm that strikes a balance between efficiency and quality with respect to cumulative reward. Our experiments with two common RL tasks (query refinement and robot movement) corroborate our theory and demonstrate the effectiveness and efficiency of our algorithms.
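To make the idea of composing reward functions at inference time concrete, below is a minimal, hypothetical sketch in a tabular TD setting. It assumes per-reward-component Q-values have been captured for each (state, action) pair and that the composite reward is a linear combination of components, so composite Q-values decompose linearly. The PolicyGraph class, its methods, and the composition rule are illustrative assumptions for this sketch, not the paper's actual graph data model or API.

```python
from collections import defaultdict

class PolicyGraph:
    """Hypothetical store for captured policies (illustrative only).

    Nodes are states; each (state, action) edge is annotated with the
    Q-values learned separately under each named reward component.
    """

    def __init__(self, components):
        self.components = list(components)  # names of reward components
        # q[state][action] -> {component name: learned Q-value}
        self.q = defaultdict(lambda: defaultdict(dict))

    def record(self, state, action, component, value):
        """Store the Q-value learned for one reward component."""
        self.q[state][action][component] = value

    def compose_policy(self, weights):
        """Greedy policy for a reward that is a weighted sum of components.

        Assumes composite Q-values decompose linearly across components,
        which is our simplifying assumption here, not a claim about the
        paper's (lossless) construction.
        """
        def act(state):
            actions = self.q[state]
            if not actions:
                raise KeyError(f"state {state!r} not in captured policy graph")
            return max(
                actions,
                key=lambda a: sum(
                    weights.get(c, 0.0) * actions[a].get(c, 0.0)
                    for c in self.components
                ),
            )
        return act

# Toy usage: two captured reward components, composed at inference time.
g = PolicyGraph(components=["reach_goal", "avoid_obstacle"])
g.record("s0", "left",  "reach_goal", 1.0)
g.record("s0", "left",  "avoid_obstacle", -2.0)
g.record("s0", "right", "reach_goal", 0.5)
g.record("s0", "right", "avoid_obstacle", 0.0)

policy = g.compose_policy({"reach_goal": 1.0, "avoid_obstacle": 1.0})
print(policy("s0"))  # -> "right" (composite value 0.5 beats -1.0)
```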

Original language: English (US)
Article number: 41
Journal: VLDB Journal
Volume: 34
Issue number: 4
State: Published - Jul 2025

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Hardware and Architecture

Keywords

  • Optimization algorithms
  • Reinforcement Learning
  • Reusability of ML models
