
Detecting Non-Membership in LLM Training Data via Rank Correlations

About

As large language models (LLMs) are trained on increasingly vast and opaque text corpora, determining which data contributed to training has become essential for copyright enforcement, compliance auditing, and user trust. While prior work focuses on detecting whether a dataset was used in training (membership inference), the complementary problem of verifying that a dataset was not used has received little attention. We address this gap by introducing PRISM, a test that detects dataset-level non-membership using only grey-box access to model logits. Our key insight is that two models that have not seen a dataset exhibit higher rank correlation in their normalized token log probabilities than when one model has been trained on that data. Using this observation, we construct a correlation-based test that detects non-membership. Empirically, PRISM reliably rules out membership in training data across all datasets tested while avoiding false positives, thus offering a framework for verifying that specific datasets were excluded from LLM training.
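The core statistic behind this idea can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes we already have per-document normalized log probabilities from two models (the synthetic numbers here are invented for illustration), and it computes the Spearman rank correlation between the two score lists. Under the paper's insight, a high correlation is evidence that neither model was trained on the dataset; the actual PRISM test, threshold, and normalization details differ.

```python
# Illustrative sketch of a rank-correlation non-membership statistic.
# Assumptions: scores are hypothetical normalized log probabilities per
# document from two models; PRISM's real normalization and test differ.

def ranks(xs):
    """Return 1-based ranks of xs, using average ranks for ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a block of tied values.
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank shared by the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical normalized log probabilities for 5 documents under
# two models that rank the documents identically.
logp_model_a = [-2.1, -3.4, -1.8, -4.0, -2.9]
logp_model_b = [-2.0, -3.1, -1.9, -3.8, -2.7]
rho = spearman(logp_model_a, logp_model_b)  # close to 1.0
```

A correlation near 1 between the two models' document rankings would, under this sketch, be consistent with non-membership; training one model on the dataset would be expected to perturb its ranking and lower the correlation.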

Pranav Shetty, Mirazul Haque, Zhiqiang Ma, Xiaomo Liu • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Membership Inference Attack | REDDIT | -- | 14 |
| Non-membership detection | arXiv | p-value 1.00e-4 | 9 |
| Non-membership detection | Pubmed | p-value 1.00e-4 | 9 |
| Non-membership detection | HN | p-value 1.00e-4 | 5 |
| Non-membership detection | CC | p-value 1.00e-4 | 5 |
| Non-membership detection | Ubuntu | p-value 1.00e-4 | 5 |
| Non-membership detection | Freelaw | p-value 1.00e-4 | 5 |
| Non-membership detection | ENRON | p-value 1.00e-4 | 5 |
| Dataset level membership detection | HN | -- | 4 |
| Dataset level membership detection | CommonCrawl | -- | 4 |
