# PrivORL: Differentially Private Synthetic Dataset for Offline Reinforcement Learning

## About
Offline reinforcement learning (RL) has recently become a popular RL paradigm. In offline RL, data providers share pre-collected datasets -- either individual transitions or sequences of transitions forming trajectories -- so that RL models (also called agents) can be trained without direct interaction with the environment. Offline RL thus avoids costly online interaction required by traditional RL and has proven effective in critical areas such as navigation. At the same time, concerns about privacy leakage from offline RL datasets have emerged.

To safeguard private information in offline RL datasets, we propose PrivORL, the first differentially private (DP) offline dataset synthesis method. PrivORL leverages a diffusion model and a diffusion transformer to synthesize transitions and trajectories, respectively, under DP; the synthetic dataset can then be securely released for downstream analysis and research. PrivORL adopts the popular approach of pre-training a synthesizer on public datasets and then fine-tuning it on the sensitive dataset with DP Stochastic Gradient Descent (DP-SGD). Additionally, PrivORL introduces curiosity-driven pre-training, which uses feedback from a curiosity module to diversify the synthetic data, producing diverse synthetic transitions and trajectories that closely resemble the sensitive dataset.

Extensive experiments on five sensitive offline RL datasets show that our method achieves better utility and fidelity than baselines in both DP transition and trajectory synthesis. The replication package is available at the GitHub repository.
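To make the fine-tuning step concrete, the following is a minimal sketch of one DP-SGD update as used generically in DP fine-tuning (clip each per-sample gradient, sum, add Gaussian noise calibrated to the clipping bound, then step). The function name, learning rate, and hyperparameter values are illustrative, not PrivORL's actual configuration.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD update on a flat parameter vector.

    per_sample_grads: list of gradients, one per example in the batch.
    Each gradient is clipped to L2 norm <= clip_norm; Gaussian noise with
    std = noise_multiplier * clip_norm is added to the clipped sum.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # scale down (never up) so the per-example contribution is bounded
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    noisy_mean = (summed + noise) / len(per_sample_grads)
    return params - lr * noisy_mean
```

In practice this per-sample clipping and noising is handled by a DP training library, and the noise multiplier is chosen via a privacy accountant to meet a target privacy budget.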
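The abstract does not specify how the curiosity module scores samples; a common choice in the curiosity literature is a prediction-error (ICM-style) signal, where a learned forward model predicts the next state and its error serves as the novelty score. The sketch below illustrates that generic idea only; the forward model and scoring rule here are assumptions, not PrivORL's design.

```python
import numpy as np

def curiosity_score(forward_model, state, action, next_state):
    """Generic prediction-error novelty score.

    forward_model: callable (state, action) -> predicted next state.
    Transitions the forward model predicts poorly (high error) are
    treated as novel, and that feedback can steer pre-training toward
    more diverse synthetic data.
    """
    pred = forward_model(state, action)
    return 0.5 * float(np.sum((pred - next_state) ** 2))
```

A pre-trained synthesizer can then be rewarded for generating transitions with higher scores, encouraging coverage of under-represented regions of the dataset.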
## Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | Kitchen Partial | Normalized Score | 25.5 | 62 |
| Offline Reinforcement Learning | Maze2D medium | Normalized Return | 90.7 | 38 |
| Offline Reinforcement Learning | Maze2D umaze | Normalized Return | 70.3 | 38 |
| Offline Reinforcement Learning | MuJoCo HalfCheetah | Normalized Return | 48.8 | 33 |
| Offline Reinforcement Learning | Maze2D large | Normalized Return | 81 | 33 |
| Offline Reinforcement Learning | Maze2D umaze v1 | Normalized Return | 52.2 | 18 |
| Offline Reinforcement Learning | Maze2D medium v1 | Normalized Return | 49.3 | 18 |
| Offline Reinforcement Learning | Maze2D large v1 | Normalized Return | 37.7 | 18 |
| Offline Reinforcement Learning | Kitchen v0 (partial) | Normalized Return | 13.8 | 18 |
| Transition Synthesis | Maze2D umaze | Marginal | 94.8 | 5 |