
Differential Privacy for Text Analytics via Natural Text Sanitization

About

Texts convey sophisticated knowledge, but they also convey sensitive information. Despite the success of general-purpose language models and domain-specific mechanisms with differential privacy (DP), existing text sanitization mechanisms still provide low utility, constrained by the curse of dimensionality in text representations. The companion issue of utilizing sanitized texts for downstream analytics is also under-explored. This paper takes a direct approach to text sanitization. Our insight is to consider both sensitivity and similarity via our new local DP notion. The sanitized texts also contribute to our sanitization-aware pretraining and fine-tuning, enabling privacy-preserving natural language processing over the BERT language model with promising utility. Surprisingly, the high utility does not increase the success rate of inference attacks.
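To illustrate the general idea of similarity-aware local DP sanitization (not the paper's exact mechanism), here is a toy Python sketch: each token is replaced by a vocabulary word sampled with the exponential mechanism, where utility is the negative embedding distance, so semantically closer words are more likely. The vocabulary, embeddings, and the distance-based scoring are illustrative assumptions.

```python
import math
import random

# Toy 2-D "embeddings" for a tiny vocabulary (illustrative values only).
VOCAB_EMBEDDINGS = {
    "good":  [0.90, 0.10],
    "great": [0.85, 0.15],
    "bad":   [0.10, 0.90],
    "poor":  [0.15, 0.85],
}

def distance(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def sanitize_token(token, epsilon, rng=random):
    """Replace a token via the exponential mechanism.

    Each candidate word w gets weight exp(-epsilon * d(token, w) / 2):
    larger epsilon concentrates mass on nearby (similar) words, smaller
    epsilon gives stronger privacy but noisier replacements.
    """
    emb = VOCAB_EMBEDDINGS[token]
    words = list(VOCAB_EMBEDDINGS)
    weights = [math.exp(-epsilon * distance(emb, VOCAB_EMBEDDINGS[w]) / 2)
               for w in words]
    r = rng.random() * sum(weights)
    for w, wt in zip(words, weights):
        r -= wt
        if r <= 0:
            return w
    return words[-1]

def sanitize(text, epsilon):
    """Sanitize a whitespace-tokenized text word by word."""
    return " ".join(sanitize_token(t, epsilon) for t in text.split())
```

In this sketch, a very large epsilon essentially keeps the original word, while a small epsilon makes all replacements nearly uniform; the paper's contribution lies in designing the DP notion and mechanism so that utility survives at practical privacy levels.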

Xiang Yue, Minxin Du, Tianhao Wang, Yaliang Li, Huan Sun, Sherman S. M. Chow • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Sentiment Classification | SST-2 (test) | Accuracy | 79.58 | 214
Text Classification | SST-2 | Accuracy | 74.46 | 121
Natural Language Inference | QNLI | Accuracy | 76.36 | 42
Semantic Textual Similarity | MedSTS | Pearson Correlation | 0.5423 | 17
Query Attack | SST-2 (test) | Query Count (she) | 4 | 11
