
Model-Based Transfer Learning for Contextual Reinforcement Learning

About

Deep reinforcement learning (RL) is a powerful approach to complex decision making. However, one issue that limits its practical application is its brittleness: training sometimes fails in the presence of small changes in the environment. Motivated by the success of zero-shot transfer, where pre-trained models perform well on related tasks, we consider the problem of selecting a good set of training tasks to maximize generalization performance across a range of tasks. Given the high cost of training, it is critical to select training tasks strategically, but it is not well understood how to do so. We hence introduce Model-Based Transfer Learning (MBTL), which layers on top of existing RL methods to effectively solve contextual RL problems. MBTL models the generalization performance in two parts: 1) the performance set point, modeled using Gaussian processes, and 2) performance loss (generalization gap), modeled as a linear function of contextual similarity. MBTL combines these two pieces of information within a Bayesian optimization (BO) framework to strategically select training tasks. We show theoretically that the method exhibits sublinear regret in the number of training tasks and discuss conditions to further tighten regret bounds. We experimentally validate our methods using urban traffic and standard continuous control benchmarks. The experimental results suggest that MBTL can achieve up to 43x improved sample efficiency compared with canonical independent training and multi-task training. Further experiments demonstrate the efficacy of BO and the insensitivity to the underlying RL algorithm and hyperparameters. This work lays the foundations for investigating explicit modeling of generalization, thereby enabling principled yet effective methods for contextual RL.
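To make the two-part model concrete, here is a minimal sketch of one task-selection step in the spirit of the abstract: a Gaussian process (hand-rolled with an RBF kernel) models the performance set point over a 1-D context space, a linear term in context distance models the generalization gap, and an optimistic (UCB-style) acquisition picks the training task with the largest estimated marginal gain across all contexts. The function name `mbtl_select` and the knobs `slope`, `beta`, and the kernel length scale are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf(a, b, ell=0.2):
    # Squared-exponential kernel on 1-D contexts.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def mbtl_select(contexts, trained_x, trained_y,
                slope=1.0, beta=2.0, noise=1e-6):
    """Pick the next training context (one BO step).

    Illustrative model: performance of a policy trained on x and
    deployed on c is set_point(x) - slope * |c - x|, where set_point
    is modeled by a GP and the linear term is the generalization gap.
    `slope` and `beta` are hypothetical hyperparameters.
    """
    C = np.asarray(contexts, float)
    X = np.asarray(trained_x, float)
    y = np.asarray(trained_y, float)

    # GP posterior of the set point at every candidate context.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(C, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = np.clip(1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks), 0.0, None)
    ucb = mu + beta * np.sqrt(var)  # optimistic set-point estimate

    # Performance already covered by trained tasks (best source per context).
    covered = np.max(y[None, :] - slope * np.abs(C[:, None] - X[None, :]),
                     axis=1)

    # Acquisition: estimated lift across all contexts if we train at c'.
    lift = ucb[None, :] - slope * np.abs(C[:, None] - C[None, :])
    gain = np.clip(lift - covered[:, None], 0.0, None).sum(axis=0)
    return int(np.argmax(gain))
```

In use, one would alternate calling `mbtl_select`, training an RL policy on the chosen context, recording its return, and appending the pair to `trained_x`/`trained_y`; the acquisition naturally avoids re-selecting already-trained contexts, since their marginal gain is near zero.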

Jung-Hoon Cho, Vindula Jayawardana, Sirui Li, Cathy Wu • 2024

Related benchmarks

| Task | Dataset | Normalized Reward | Rank |
|---|---|---|---|
| Advisory autonomy | Advisory Autonomy Single lane ring (Acceleration guidance) | 93.29 | 6 |
| Advisory autonomy | Advisory Autonomy Highway ramp (Speed guidance) | 74.26 | 6 |
| Dynamic eco-driving | Eco-Driving Penetration Rate variation | 0.6519 | 6 |
| Dynamic eco-driving | Eco-Driving Inflow variation | 0.5356 | 6 |
| Dynamic eco-driving | Eco-Driving Green Phase variation | 0.4932 | 6 |
| Traffic Signal Control | Traffic Signal Inflow variation | 0.8729 | 6 |
| Advisory autonomy | Advisory Autonomy Single lane ring (Speed guidance) | 0.982 | 6 |
| Traffic Signal Control | Traffic Signal Road Length variation | 0.9409 | 6 |
| Advisory autonomy | Advisory Autonomy Highway ramp (Acceleration guidance) | 0.6282 | 6 |
| Traffic Signal Control | Traffic Signal Speed Limit variation | 88.66 | 6 |
