De-Anonymization at Scale via Tournament-Style Attribution
About
As LLMs rapidly advance and enter real-world use, their privacy implications are increasingly important. We study an authorship de-anonymization threat: using LLMs to link anonymous documents to their authors, potentially compromising settings such as double-blind peer review. We propose De-Anonymization at Scale (DAS), an LLM-based method for attributing authorship among tens of thousands of candidate texts. DAS uses a tournament-style sequential elimination strategy: it randomly partitions the candidate corpus into fixed-size groups, prompts an LLM to select from each group the text most likely written by the same author as a query text, and iteratively re-queries the surviving candidates to produce a ranked top-k list. To make this practical at scale, DAS adds a dense-retrieval prefilter that shrinks the search space and a majority-voting aggregation over multiple independent runs that improves robustness and ranking precision. Experiments on anonymized review data show that DAS can recover same-author texts from pools of tens of thousands with accuracy well above chance, demonstrating a realistic privacy risk for anonymous platforms. On standard authorship benchmarks (Enron emails and blog posts), DAS also improves both accuracy and scalability over prior approaches, highlighting a new LLM-enabled de-anonymization vulnerability.
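No reference code is published here, so the following is a minimal Python sketch of the pipeline as the paragraph above describes it: dense-retrieval prefilter, iterated group-wise tournament rounds, and majority voting over independent runs. The group size, run count, prefilter depth, and the `pick_same_author` callback (which would wrap the actual LLM prompt) are illustrative assumptions, not the authors' settings.

```python
# Hypothetical sketch of the DAS pipeline; all names and constants are
# assumptions for illustration, not the paper's implementation.
import random
from collections import Counter

import numpy as np

GROUP_SIZE = 10    # candidates shown to the LLM per prompt (assumed)
TOP_K = 5          # size of the final ranked list
NUM_RUNS = 7       # independent tournament runs aggregated by voting (assumed)
PREFILTER_N = 500  # candidates kept by the dense-retrieval prefilter (assumed)


def prefilter(query_vec: np.ndarray, cand_vecs: np.ndarray, n: int) -> list[int]:
    """Dense-retrieval prefilter: keep the n candidates whose embeddings
    have the highest cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    c = cand_vecs / np.linalg.norm(cand_vecs, axis=1, keepdims=True)
    sims = c @ q
    return list(np.argsort(-sims)[:n])


def tournament_round(query: str, texts: dict[int, str], survivors: list[int],
                     pick_same_author) -> list[int]:
    """Randomly partition the survivors into fixed-size groups and have the
    LLM pick, per group, the text most likely by the query's author."""
    random.shuffle(survivors)
    winners = []
    for i in range(0, len(survivors), GROUP_SIZE):
        group = survivors[i:i + GROUP_SIZE]
        # pick_same_author(query, group_texts, group_ids) -> chosen id
        winners.append(pick_same_author(query, [texts[j] for j in group], group))
    return winners


def das_rank(query: str, texts: dict[int, str], query_vec: np.ndarray,
             cand_vecs: np.ndarray, pick_same_author) -> list[int]:
    """Full DAS run: prefilter, then eliminate via tournament rounds until
    at most TOP_K candidates survive; aggregate runs by majority voting."""
    pool = prefilter(query_vec, cand_vecs, PREFILTER_N)
    votes = Counter()
    for _ in range(NUM_RUNS):
        survivors = list(pool)
        while len(survivors) > TOP_K:
            survivors = tournament_round(query, texts, survivors, pick_same_author)
        votes.update(survivors)  # each surviving candidate gets one vote per run
    return [idx for idx, _ in votes.most_common(TOP_K)]
```

In this sketch the LLM only ever appears behind the `pick_same_author` callback, so the tournament logic is independent of the model and prompt used; each round shrinks the pool by roughly a factor of the group size, which is what makes the method tractable at tens of thousands of candidates.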
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Authorship Attribution | ICLR anonymous reviewing systems | Rank@5 | 0.28 | 2 |
| Author Identification | Blog Authorship Corpus | Rank@5 | 0.74 | 1 |
| Author Identification | Enron Email dataset | Rank@5 | 74 | 1 |
| Authorship De-anonymization | Research Paper dataset arXiv CS.LG 2019-2024 (Real scenario) | Rank@5 | 66 | 1 |