
Data-Driven Offline Decision-Making via Invariant Representation Learning

About

The goal in offline data-driven decision-making is to synthesize decisions that optimize a black-box utility function, using a previously collected static dataset, with no active interaction. These problems appear in many forms: offline reinforcement learning (RL), where we must produce actions that optimize the long-term reward; bandits from logged data, where the goal is to determine the correct arm; and offline model-based optimization (MBO), where we must find the optimal design given access only to a static dataset. A key challenge in all these settings is distributional shift: when we optimize the input to a model trained on offline data, it is easy to produce an out-of-distribution (OOD) input that appears erroneously good. In contrast to prior approaches that rely on pessimism or conservatism to tackle this problem, in this paper we formulate offline data-driven decision-making as domain adaptation, where the goal is to make accurate predictions for the value of optimized decisions (the "target domain") while training only on the dataset (the "source domain"). This perspective leads to invariant objective models (IOM), our approach for addressing distributional shift by enforcing invariance between the learned representations of the training dataset and of the optimized decisions. In IOM, if the optimized decisions are too different from the training dataset, the representation is forced to discard much of the information that distinguishes good designs from bad ones, making all choices seem mediocre. Critically, when the optimizer is aware of this representational tradeoff, it should choose not to stray too far from the training distribution, leading to a natural trade-off between distributional shift and learning performance.
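To make the invariance idea concrete, here is a minimal NumPy sketch of the kind of objective the abstract describes: a supervised regression loss on the offline dataset plus a penalty on the discrepancy between the representations of the dataset and of the optimized designs. Everything here is a hypothetical stand-in (the linear encoder `featurize`, the linear-kernel MMD in `mmd2`, the toy data, and the weight `lam` are all assumptions for illustration), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy offline dataset: designs X and noisy utilities y (hypothetical stand-in).
X = rng.normal(size=(256, 8))
y = X[:, 0] + 0.1 * rng.normal(size=256)

def featurize(x, W):
    """Linear representation phi(x) = x @ W, a stand-in for a learned encoder."""
    return x @ W

def mmd2(a, b):
    """Squared MMD with a linear kernel: ||mean(a) - mean(b)||^2.
    A simple discrepancy between two representation distributions."""
    d = a.mean(axis=0) - b.mean(axis=0)
    return float(d @ d)

def iom_loss(W, w_pred, X, y, X_opt, lam=1.0):
    """Regression loss on the offline data plus an invariance penalty tying
    the representation of optimized designs X_opt to the training distribution."""
    phi = featurize(X, W)
    mse = float(np.mean((phi @ w_pred - y) ** 2))
    inv = mmd2(phi, featurize(X_opt, W))
    return mse + lam * inv

W = rng.normal(size=(8, 4))
w_pred = rng.normal(size=4)
X_opt = X[:16] + 2.0  # pretend these are designs an optimizer has pushed OOD
print(iom_loss(W, w_pred, X, y, X_opt))
```

Under this sketch, designs that drift away from the data distribution inflate the invariance term, so an optimizer that is aware of the penalty is discouraged from straying far from the training distribution, which is the trade-off the abstract describes.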

Han Qi, Yi Su, Aviral Kumar, Sergey Levine • 2022

Related benchmarks

Task                              Dataset                            Result                         Rank
Offline Model-Based Optimization  Hopper Controller (Design-Bench)   100th Percentile Score: 2.444  15
Offline Model-Based Optimization  Ant Morphology (Design-Bench)      100th Percentile Score: 0.977  15
Offline Model-Based Optimization  D'Kitty Morphology (Design-Bench)  100th Percentile Score: 94.9   15
Offline Model-Based Optimization  Superconductor (Design-Bench)      100th Percentile Score: 51.1   14

Other info

Code
