
PDR: A Plug-and-Play Positional Decay Framework for LLM Pre-training Data Detection

About

Detecting pre-training data in Large Language Models (LLMs) is crucial for auditing data privacy and copyright compliance, yet it remains challenging in black-box, zero-shot settings where computational resources and training data are scarce. While existing likelihood-based methods have shown promise, they typically aggregate token-level scores using uniform weights, thereby neglecting the inherent information-theoretic dynamics of autoregressive generation. In this paper, we hypothesize and empirically validate that memorization signals are heavily skewed towards the high-entropy initial tokens, where model uncertainty is highest, and decay as context accumulates. To leverage this linguistic property, we introduce Positional Decay Reweighting (PDR), a training-free and plug-and-play framework. PDR explicitly reweights token-level scores to amplify distinct signals from early positions while suppressing noise from later ones. Extensive experiments show that PDR acts as a robust prior and generally enhances a wide range of advanced methods across multiple benchmarks.
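
To make the reweighting idea concrete, here is a minimal sketch in Python. The abstract does not specify the exact decay function or hyperparameters, so the exponential form, the `decay` value, and the helper names (`pdr_score`, `per_token_log_likelihoods`) below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pdr_score(token_scores, decay=0.05):
    """Positional Decay Reweighting (sketch).

    Reweights per-token detection scores (e.g., token log-likelihoods
    from a likelihood-based detector) so that early, high-entropy
    positions dominate the aggregate, instead of the uniform average
    used by most baselines. The exponential decay form and the `decay`
    hyperparameter are assumptions for illustration.
    """
    token_scores = np.asarray(token_scores, dtype=float)
    positions = np.arange(len(token_scores))
    weights = np.exp(-decay * positions)   # amplify early tokens, damp later ones
    weights /= weights.sum()               # normalize to a weighted mean
    return float(np.dot(weights, token_scores))

# Usage: wrap any baseline's token-level scores before thresholding.
# scores = per_token_log_likelihoods(model, text)  # hypothetical helper
# detection_score = pdr_score(scores, decay=0.05)
```

Because PDR only changes how token-level scores are aggregated, it can be dropped in front of any method that produces per-token scores, which is what makes it plug-and-play.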

Jinhan Liu, Yibo Yang, Ruiying Lu, Piotr Piekos, Yimeng Chen, Peng Wang, Dandan Guo • 2026

Related benchmarks

Task | Dataset | Result | Rank
Pre-training Data Detection | WikiMIA Original Length 32 | AUROC 85.9 | 21
Membership Inference Attack | MIMIR Wiki | Success Rate 55.5 | 9
Pre-training Data Detection | WikiMIA Paraphrased Length 32 | AUROC 70.9 | 2
Pre-training Data Detection | WikiMIA Original Length 64 | AUROC 74.8 | 2
Pre-training Data Detection | WikiMIA Paraphrased Length 64 | AUROC 70.6 | 2
Pre-training Data Detection | WikiMIA Original Length 128 | AUROC 75.9 | 2
Pre-training Data Detection | WikiMIA Paraphrased Length 128 | AUROC 73.3 | 2
