DP-RFT: Learning to Generate Synthetic Text via Differentially Private Reinforcement Fine-Tuning
About
Differentially private (DP) synthetic data generation plays a pivotal role in developing large language models (LLMs) on private data, where data owners cannot provide eyes-on access to individual examples. Generating DP synthetic data typically involves a difficult trade-off. On one hand, DP fine-tuning methods train an LLM as a synthetic data generator with formal privacy guarantees, but they still require the raw content of private examples for model training. On the other hand, methods that avoid direct exposure to private data are bounded by an off-the-shelf, un-finetuned model, whose outputs often lack domain fidelity. Can we train an LLM to generate high-quality synthetic text without eyes-on access to individual private examples? In this work, we introduce Differentially Private Reinforcement Fine-Tuning (DP-RFT), an online reinforcement learning algorithm for synthetic data generation with LLMs. DP-RFT leverages DP-protected nearest-neighbor votes from an eyes-off private corpus as a reward signal for on-policy synthetic samples generated by an LLM. Through Proximal Policy Optimization (PPO), the LLM iteratively learns to generate synthetic data that maximizes the expected DP votes. We evaluate DP-RFT on long-form, domain-specific synthetic data generation, including news articles, meeting transcripts, and medical article abstracts. Our experiments show that DP-RFT closes the gap between private-evolution and DP fine-tuning methods in the fidelity and downstream utility of the generated synthetic data, while respecting the private data boundary.
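The DP nearest-neighbor voting reward described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `dp_nn_vote_rewards`, the use of embeddings with Euclidean distance, and the Gaussian-mechanism noise scale are all assumptions. The idea it demonstrates is the one stated in the abstract: each private example votes for its nearest on-policy synthetic sample, and calibrated noise on the vote histogram makes the reward signal differentially private.

```python
import numpy as np

def dp_nn_vote_rewards(private_emb, synthetic_emb, sigma=1.0, rng=None):
    """DP-protected nearest-neighbor votes as per-sample rewards (sketch).

    private_emb:   (n_private, d) embeddings of the eyes-off private corpus
    synthetic_emb: (n_synth, d) embeddings of on-policy synthetic samples
    sigma:         Gaussian noise scale; adding/removing one private example
                   changes the vote histogram by 1 in a single bin, so sigma
                   is calibrated to that unit sensitivity (Gaussian mechanism)
    """
    rng = np.random.default_rng(rng)
    # Pairwise Euclidean distances: shape (n_private, n_synth).
    dists = np.linalg.norm(
        private_emb[:, None, :] - synthetic_emb[None, :, :], axis=-1
    )
    # Each private example votes for its nearest synthetic sample.
    nearest = dists.argmin(axis=1)
    votes = np.bincount(nearest, minlength=len(synthetic_emb)).astype(float)
    # Gaussian noise on the histogram protects individual private examples;
    # the noisy counts serve as rewards for the PPO update.
    return votes + rng.normal(0.0, sigma, size=votes.shape)
```

With `sigma=0` the function reduces to plain nearest-neighbor voting, which is useful for sanity checks; in actual DP training, `sigma` would be set by the target (ε, δ) budget across all training rounds.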
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Next-token prediction | Pubmed | Next Token Accuracy | 38.31 | 32 |
| Next-token prediction | BBC | Next Token Accuracy | 33.92 | 32 |
| Next-token prediction | WildChat | Next Token Accuracy | 45.42 | 32 |
| Next-token prediction | QMSum | Next Token Accuracy | 34.33 | 32 |
| Synthetic Text Generation | WildChat | Mean Embedding Similarity | 0.31 | 10 |
| Synthetic Text Generation | Pubmed | Mean Embedding Similarity | 0.52 | 10 |
| Synthetic Text Generation | BBC | Mean Embedding Similarity | 0.35 | 10 |
| Synthetic Text Generation | QMSum | Mean Embedding Similarity | 38 | 10 |
| Next-token prediction | BBC | Accuracy (ϵ=∞) | 13.72 | 5 |
| Next-token prediction | WildChat | BERT-Small Next Token Accuracy (ϵ=∞) | 13.93 | 5 |
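The embedding-similarity fidelity metric reported in the benchmarks can be sketched as below. This assumes one common definition of "mean embedding similarity", namely the cosine similarity between the centroid (mean) embeddings of the real and synthetic corpora; the benchmark's exact definition may differ, and the function name is illustrative.

```python
import numpy as np

def mean_embedding_similarity(real_emb, synth_emb):
    """Cosine similarity between corpus centroid embeddings (sketch).

    real_emb:  (n_real, d) embeddings of real (held-out) texts
    synth_emb: (n_synth, d) embeddings of generated synthetic texts
    Returns a scalar in [-1, 1]; higher means the synthetic corpus is
    closer, on average, to the real one in embedding space.
    """
    mu_r = real_emb.mean(axis=0)
    mu_s = synth_emb.mean(axis=0)
    return float(mu_r @ mu_s / (np.linalg.norm(mu_r) * np.linalg.norm(mu_s)))
```

Identical corpora give a similarity of 1.0, so the metric is most informative when compared across generators on the same real reference set.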