
Randomized Masked Finetuning: An Efficient Way to Mitigate Memorization of PIIs in LLMs

About

Memorization in natural language models, especially Large Language Models (LLMs), poses severe security and privacy risks: models tend to memorize personally identifiable information (PII) from their training data. We introduce Randomized Masked Fine-Tuning (RMFT), a novel privacy-preserving fine-tuning technique that reduces PII memorization while minimizing the impact on model performance. Using the Enron Email Dataset, we demonstrate that RMFT achieves an 80.81% reduction in Total Extraction Rate (TER) and an 80.17% reduction in Seen Extraction Rate compared to baseline fine-tuning, outperforming deduplication methods while incurring only a 5.73% increase in perplexity. We also present MaxTER, a Pareto-optimal evaluation framework for assessing privacy-utility tradeoffs, and compare RMFT against deduplication using the Area Under the Response Curve (AURC) metric.
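The abstract does not detail RMFT's mechanics, but a minimal sketch of the core idea — independently masking detected PII tokens at random during fine-tuning, so that no individual PII span is seen consistently enough to be memorized — might look like the following. The function name, mask probability, and token-level masking strategy here are illustrative assumptions, not the paper's actual implementation:

```python
import random

# Hypothetical sketch of the randomized-masking step in RMFT (assumed
# behavior, not the paper's implementation): each token flagged as PII
# is replaced by a mask token independently with probability mask_prob,
# so the model rarely sees the same full PII span across epochs.
MASK_TOKEN = "[MASK]"

def randomized_mask(tokens, pii_positions, mask_prob=0.5, rng=None):
    """Mask each PII token independently with probability `mask_prob`.

    tokens        : list of token strings
    pii_positions : set of indices flagged as PII by some detector
    """
    rng = rng or random.Random()
    return [
        MASK_TOKEN if (i in pii_positions and rng.random() < mask_prob) else tok
        for i, tok in enumerate(tokens)
    ]

tokens = ["Contact", "alice@enron.com", "or", "call", "555-0199", "."]
pii = {1, 4}  # indices of the (fake) email address and phone number
masked = randomized_mask(tokens, pii, mask_prob=1.0, rng=random.Random(0))
print(masked)  # ['Contact', '[MASK]', 'or', 'call', '[MASK]', '.']
```

Because the masking is re-sampled at every step, different fine-tuning passes would expose different fragments of each PII span, which is one plausible reading of how extraction rates drop while perplexity on non-PII text is largely preserved.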

Kunj Joshi, David A. Smith • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Language Modeling | WikiText-2 (test) | PPL 6.798 | 1541 |
| PII Extraction Mitigation | Enron | TER 0.049 | 3 |
| PII Mitigation and Language Modeling | Enron (test) | Avg PPL 6.798 | 3 |
| PII Mitigation and Language Modeling | WikiText-2 (test) | Avg PPL 585.4 | 3 |
| PII Mitigation and Language Modeling | Web crawl prompt dataset (test) | Avg PPL 2.21e+3 | 3 |
