
DP-RFT: Learning to Generate Synthetic Text via Differentially Private Reinforcement Fine-Tuning

About

Differentially private (DP) synthetic data generation plays a pivotal role in developing large language models (LLMs) on private data, where data owners cannot provide eyes-on access to individual examples. Generating DP synthetic data typically involves a difficult trade-off. On one hand, DP fine-tuning methods train an LLM as a synthetic data generator with formal privacy guarantees, yet they still require the raw content of private examples for model training. On the other hand, methods that avoid direct exposure to private data are bounded by an off-the-shelf, un-finetuned model, whose outputs often lack domain fidelity. Can we train an LLM to generate high-quality synthetic text without eyes-on access to individual private examples? In this work, we introduce Differentially Private Reinforcement Fine-Tuning (DP-RFT), an online reinforcement learning algorithm for synthetic data generation with LLMs. DP-RFT leverages DP-protected nearest-neighbor votes from an eyes-off private corpus as a reward signal for on-policy synthetic samples generated by an LLM. The LLM iteratively learns to generate synthetic data that maximizes the expected DP votes through Proximal Policy Optimization (PPO). We evaluate DP-RFT on long-form, domain-specific synthetic data generation, including news articles, meeting transcripts, and medical article abstracts. Our experiments show that DP-RFT closes the gap between private evolution and DP fine-tuning methods in the fidelity and downstream utility of the generated synthetic data, while respecting the private data boundary.
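The reward signal described above can be illustrated with a minimal sketch. The following is an assumption-laden illustration, not the paper's implementation: each private example (staying behind the privacy boundary as an embedding) casts one vote for its nearest on-policy synthetic sample, and Gaussian noise is added to the vote histogram so that only DP-protected counts leave the boundary; the function name `dp_nn_vote_rewards` and the choice of the Gaussian mechanism with a fixed `sigma` are illustrative, and a real system would calibrate the noise to a target (ϵ, δ) budget.

```python
import numpy as np

def dp_nn_vote_rewards(syn_emb, priv_emb, sigma=1.0, rng=None):
    """Hypothetical sketch of a DP nearest-neighbor voting reward.

    syn_emb:  (n_syn, d) embeddings of on-policy synthetic samples.
    priv_emb: (n_priv, d) embeddings of the eyes-off private corpus.
    Returns one noisy vote count (reward) per synthetic sample.
    """
    rng = rng or np.random.default_rng(0)
    # Pairwise squared distances between private and synthetic embeddings.
    d = ((priv_emb[:, None, :] - syn_emb[None, :, :]) ** 2).sum(-1)
    # Each private example votes for its nearest synthetic sample.
    nearest = d.argmin(axis=1)
    votes = np.bincount(nearest, minlength=len(syn_emb)).astype(float)
    # Gaussian mechanism: only the noisy histogram crosses the boundary.
    votes += rng.normal(0.0, sigma, size=votes.shape)
    return votes
```

In an RL loop, these noisy counts would serve as per-sample rewards for a PPO update, so the generator is steered toward regions of the embedding space that attract private votes without ever exposing individual private examples.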

Fangyuan Xu, Sihao Chen, Zinan Lin, Taiwei Shi, Sydney Graham, Pei Zhou, Mengting Wan, Alex Stein, Virginia Estellers, Charles Chen, Morris Sharp, Richard Speyer, Tadas Baltrusaitis, Jennifer Neville, Eunsol Choi, Longqi Yang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Next-token prediction | Pubmed | Next Token Accuracy | 38.31 | 32 |
| Next-token prediction | BBC | Next Token Accuracy | 33.92 | 32 |
| Next-token prediction | WildChat | Next Token Accuracy | 45.42 | 32 |
| Next-token prediction | QMSum | Next Token Accuracy | 34.33 | 32 |
| Synthetic Text Generation | WildChat | Mean Embedding Similarity | 0.31 | 10 |
| Synthetic Text Generation | Pubmed | Mean Embedding Similarity | 0.52 | 10 |
| Synthetic Text Generation | BBC | Mean Embedding Similarity | 0.35 | 10 |
| Synthetic Text Generation | QMSum | Mean Embedding Similarity | 0.38 | 10 |
| Next-token prediction | BBC | Accuracy (ϵ=∞) | 13.72 | 5 |
| Next-token prediction | WildChat | BERT-Small Next Token Accuracy (ϵ=∞) | 13.93 | 5 |
