Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding

About

The training data in large language models is key to their success, but it also presents privacy and security risks, as it may contain sensitive information. Detecting pre-training data is crucial for mitigating these concerns. Existing methods typically analyze target text in isolation or solely with non-member contexts, overlooking potential insights from simultaneously considering both member and non-member contexts. While previous work suggested that member contexts provide little information due to the minor distributional shift they induce, our analysis reveals that these subtle shifts can be effectively leveraged when contrasted with non-member contexts. In this paper, we propose Con-ReCall, a novel approach that leverages the asymmetric distributional shifts induced by member and non-member contexts through contrastive decoding, amplifying subtle differences to enhance membership inference. Extensive empirical evaluations demonstrate that Con-ReCall achieves state-of-the-art performance on the WikiMIA benchmark and is robust against various text manipulation techniques.
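The core idea lends itself to a short sketch: score a candidate text three ways (unconditioned, prefixed with known non-member text, and prefixed with known member text) and contrast the resulting likelihood shifts. The snippet below is a minimal illustration of that idea using a generic HuggingFace causal LM; the model name, the `gamma` weight, and the exact way the three likelihoods are combined are assumptions for illustration, not the authors' reference implementation.

```python
# A minimal sketch of the contrastive scoring idea, assuming a HuggingFace
# causal LM. Names, the gamma weight, and the combination of the three
# likelihoods are illustrative assumptions, not the paper's exact formula.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-160m"  # assumption: any causal LM will do for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()


def avg_log_likelihood(text: str, prefix: str = "") -> float:
    """Average per-token log-likelihood of `text`, optionally conditioned on `prefix`."""
    target_ids = tokenizer(text, return_tensors="pt").input_ids
    if prefix:
        prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
        input_ids = torch.cat([prefix_ids, target_ids], dim=1)
        n_prefix = prefix_ids.shape[1]
    else:
        input_ids, n_prefix = target_ids, 0
    with torch.no_grad():
        logits = model(input_ids).logits
    # The prediction at position i-1 scores token i, so shift by one.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_ll = log_probs.gather(-1, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    # Keep only the log-likelihoods of the target tokens, not the prefix.
    return token_ll[:, max(n_prefix - 1, 0):].mean().item()


def con_recall_style_score(target: str, member_prefix: str, nonmember_prefix: str,
                           gamma: float = 1.0) -> float:
    """Contrast the likelihood shift from a member prefix against a non-member prefix.

    The returned score is thresholded (calibrated on held-out data) to decide
    membership; `gamma` is a hypothetical contrast weight.
    """
    ll_plain = avg_log_likelihood(target)
    ll_member = avg_log_likelihood(target, prefix=member_prefix)
    ll_nonmember = avg_log_likelihood(target, prefix=nonmember_prefix)
    # Member and non-member contexts shift the likelihood asymmetrically;
    # the contrastive combination amplifies that gap relative to the
    # unconditioned likelihood.
    return (ll_nonmember - gamma * ll_member) / ll_plain
```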

Cheng Wang, Yiwei Wang, Bryan Hooi, Yujun Cai, Nanyun Peng, Kai-Wei Chang • 2024

Related benchmarks

Task | Dataset | Result | Rank
Suffix Ranking | Extraction Challenge Dataset | MP (%) 50.4 | 66
Membership Inference Attack | XSum (test) | AUC 0.531 | 43
Membership Inference Attack | AG News (test) | AUC 0.525 | 43
Membership Inference Attack | PubMed Central | AUC 0.518 | 26
Membership Inference Attack | HackerNews | AUC 0.501 | 26
Membership Inference Attack | GitHub | AUC 0.549 | 26
Membership Inference Attack | Wikipedia (en) | AUC 0.498 | 26
Membership Inference Attack | Pile-CC | AUC 0.498 | 26
Membership Inference Attack | arXiv | AUC 0.50 | 26
Membership Inference Attack | CC News | AUC 0.513 | 14
Showing 10 of 22 rows
