PDR: A Plug-and-Play Positional Decay Framework for LLM Pre-training Data Detection
About
Detecting pre-training data in Large Language Models (LLMs) is crucial for auditing data privacy and copyright compliance, yet it remains challenging in black-box, zero-shot settings where computational resources and training data are scarce. While existing likelihood-based methods have shown promise, they typically aggregate token-level scores with uniform weights, neglecting the inherent information-theoretic dynamics of autoregressive generation. In this paper, we hypothesize and empirically validate that memorization signals are heavily skewed towards the high-entropy initial tokens, where model uncertainty is highest, and decay as context accumulates. To leverage this linguistic property, we introduce Positional Decay Reweighting (PDR), a training-free and plug-and-play framework. PDR explicitly reweights token-level scores, amplifying the distinct signals from early positions while suppressing noise from later ones. Extensive experiments show that PDR acts as a robust prior and, in most settings, enhances a wide range of advanced methods across multiple benchmarks.
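The reweighting idea above can be illustrated with a minimal sketch. The function name, the exponential decay shape, and the `decay` parameter are assumptions for illustration, not the authors' implementation; PDR only requires that weights decrease monotonically with token position.

```python
import numpy as np

def pdr_score(token_scores, decay=0.05):
    """Aggregate per-token scores (e.g. log-likelihoods) with a
    positional-decay prior: earlier tokens get larger weights,
    later tokens are down-weighted.

    Illustrative sketch only -- the exponential schedule and the
    `decay` hyperparameter are assumed, not taken from the paper.
    """
    scores = np.asarray(token_scores, dtype=float)
    positions = np.arange(len(scores))
    weights = np.exp(-decay * positions)  # monotonically decaying weights
    weights /= weights.sum()              # normalize to a convex combination
    return float(np.dot(weights, scores))
```

With `decay=0` this reduces to the uniform average used by existing likelihood-based detectors; larger `decay` shifts the aggregate score toward the early, high-entropy positions.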
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Pre-training Data Detection | WikiMIA Original Length 32 | AUROC 85.9 | 21 |
| Membership Inference Attack | MIMIR | Wiki Success Rate 55.5 | 9 |
| Pre-training Data Detection | WikiMIA Paraphrased Length 32 | AUROC 70.9 | 2 |
| Pre-training Data Detection | WikiMIA Original Length 64 | AUROC 74.8 | 2 |
| Pre-training Data Detection | WikiMIA Paraphrased Length 64 | AUROC 70.6 | 2 |
| Pre-training Data Detection | WikiMIA Original Length 128 | AUROC 75.9 | 2 |
| Pre-training Data Detection | WikiMIA Paraphrased Length 128 | AUROC 73.3 | 2 |