
PrivORL: Differentially Private Synthetic Dataset for Offline Reinforcement Learning

About

Recently, offline reinforcement learning (RL) has become a popular RL paradigm. In offline RL, data providers share pre-collected datasets, either as individual transitions or as sequences of transitions forming trajectories, to enable the training of RL models (also called agents) without direct interaction with the environment. Offline RL avoids the costly environment interactions that traditional RL requires, and has proven effective in critical areas such as navigation. Meanwhile, concerns about privacy leakage from offline RL datasets have emerged. To safeguard private information in offline RL datasets, we propose the first differentially private (DP) offline dataset synthesis method, PrivORL, which leverages a diffusion model and a diffusion transformer to synthesize transitions and trajectories, respectively, under DP. The synthetic dataset can then be securely released for downstream analysis and research. PrivORL adopts the popular approach of pre-training a synthesizer on public datasets and then fine-tuning it on sensitive datasets using differentially private stochastic gradient descent (DP-SGD). Additionally, PrivORL introduces curiosity-driven pre-training, which uses feedback from a curiosity module to diversify the synthetic dataset, and thus can generate diverse synthetic transitions and trajectories that closely resemble the sensitive dataset. Extensive experiments on five sensitive offline RL datasets show that our method achieves better utility and fidelity in both DP transition and trajectory synthesis compared to baselines. The replication package is available at the GitHub repository.
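The DP-SGD fine-tuning step mentioned above follows a standard recipe: clip each per-sample gradient to a fixed norm, average, and add calibrated Gaussian noise before updating. The sketch below is a minimal, framework-free illustration of that recipe in NumPy; the function name, hyperparameter values, and the use of plain arrays are assumptions for illustration, not PrivORL's actual implementation.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.01, rng=None):
    """One DP-SGD update (hypothetical helper, not the paper's code).

    1. Clip each per-sample gradient to L2 norm <= clip_norm.
    2. Average the clipped gradients.
    3. Add Gaussian noise with std noise_multiplier * clip_norm / batch_size.
    4. Take a plain gradient step.
    """
    rng = rng or np.random.default_rng(0)
    batch_size = len(per_sample_grads)

    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping threshold.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))

    avg_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size,
                       size=avg_grad.shape)
    return params - lr * (avg_grad + noise)
```

In practice, libraries such as Opacus automate per-sample gradient computation and privacy accounting; the clipping threshold and noise multiplier jointly determine the (epsilon, delta) privacy guarantee over the whole fine-tuning run.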

Chen Gong, Zheng Liu, Kecen Li, Tianhao Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | Kitchen Partial | Normalized Score | 25.5 | 62 |
| Offline Reinforcement Learning | Maze2D medium | Normalized Return | 90.7 | 38 |
| Offline Reinforcement Learning | Maze2D umaze | Normalized Return | 70.3 | 38 |
| Offline Reinforcement Learning | MuJoCo HalfCheetah | Normalized Return | 48.8 | 33 |
| Offline Reinforcement Learning | Maze2D large | Normalized Return | 81.0 | 33 |
| Offline Reinforcement Learning | Maze2D umaze v1 | Normalized Return | 52.2 | 18 |
| Offline Reinforcement Learning | Maze2D medium v1 | Normalized Return | 49.3 | 18 |
| Offline Reinforcement Learning | Maze2D large v1 | Normalized Return | 37.7 | 18 |
| Offline Reinforcement Learning | Kitchen v0 (partial) | Normalized Return | 13.8 | 18 |
| Transition Synthesis | Maze2D umaze | Marginal | 94.8 | 5 |

Showing 10 of 17 rows
