
Towards Label-Only Membership Inference Attack against Pre-trained Large Language Models

About

Membership Inference Attacks (MIAs) aim to predict whether a data sample belongs to a model's training set. Although prior research has extensively explored MIAs against Large Language Models (LLMs), these attacks typically require access to the complete output logits (i.e., logits-based attacks), which are usually unavailable in practice. In this paper, we study the vulnerability of pre-trained LLMs to MIAs in the label-only setting, where the adversary can access only the generated tokens (text). We first show that existing label-only MIAs are largely ineffective against pre-trained LLMs, even though they are highly effective at inferring the fine-tuning datasets used for personalized LLMs. We find that their failure stems from two main causes: better generalization and overly coarse perturbation. Specifically, because the pre-training corpora are extensive and each sample is exposed only a few times, LLMs exhibit minimal robustness differences between members and non-members, making token-level perturbations too coarse to capture such differences. To alleviate these problems, we propose PETAL: a label-only membership inference attack based on PEr-Token semAntic simiLarity. PETAL leverages token-level semantic similarity to approximate output probabilities and subsequently calculates perplexity. It finally exposes membership based on the common assumption that members are 'better' memorized and have lower perplexity. We conduct extensive experiments on the WikiMIA benchmark and the more challenging MIMIR benchmark. Empirically, PETAL outperforms extensions of existing label-only attacks against personalized LLMs and is even on par with other advanced logits-based attacks across all metrics on five prevalent open-source LLMs.
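The core idea described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (not the paper's implementation): for each position, the cosine similarity between the embedding of the ground-truth token and the embedding of the token the model actually generated is used as a proxy for the model's output probability, from which a pseudo-perplexity is computed; lower pseudo-perplexity suggests membership. The toy 2-d embedding table and the similarity-to-probability mapping are assumptions for the sake of a runnable example.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def petal_score(target_tokens, generated_tokens, embed):
    """Pseudo-perplexity from per-token semantic similarity (sketch).

    The similarity between the ground-truth token and the generated token
    stands in for the (inaccessible) output probability of the ground truth.
    """
    probs = []
    for t_true, t_gen in zip(target_tokens, generated_tokens):
        s = cosine(embed[t_true], embed[t_gen])
        # Hypothetical mapping of similarity in [-1, 1] to a (0, 1] pseudo-probability.
        p = max((s + 1.0) / 2.0, 1e-6)
        probs.append(p)
    # Perplexity = exp(mean negative log pseudo-probability).
    nll = -sum(math.log(p) for p in probs) / len(probs)
    return math.exp(nll)

# Toy embedding table (hypothetical 2-d embeddings).
embed = {"cat": [1.0, 0.0], "kitten": [0.9, 0.1], "car": [0.0, 1.0]}

# Member-like sample: generations stay semantically close to the targets.
ppl_member = petal_score(["cat", "cat"], ["kitten", "cat"], embed)
# Non-member-like sample: generations drift semantically.
ppl_nonmember = petal_score(["cat", "cat"], ["car", "car"], embed)
# Lower pseudo-perplexity -> predicted member.
assert ppl_member < ppl_nonmember
```

In practice one would threshold the score over a calibration set; here the contrast between the two toy samples simply illustrates the direction of the membership signal.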

Yu He, Boheng Li, Liu Liu, Zhongjie Ba, Wei Dong, Yiming Li, Zhan Qin, Kui Ren, Chun Chen • 2025

Related benchmarks

Task | Dataset | Result | Rank
Membership Inference | WikiMIA 32 tokens 1.0 | ROC AUC 64.1 | 66
Membership Inference Attack | Wikipedia | AUC 0.637 | 52
Membership Inference Attack | Pile CC Pythia | ROC AUC 55 | 36
Membership Inference Attack | DM Math Pythia | ROC AUC 87 | 36
Membership Inference | GitHub Pythia (train) | TPR@1%FPR 44.4 | 36
Membership Inference Attack | Wikipedia Pythia | ROC AUC 62 | 36
Membership Inference Attack | GitHub Pythia | ROC AUC 0.87 | 36
Membership Inference Attack | HackerNews Pythia | ROC AUC 0.59 | 36
Membership Inference | Wikipedia Pythia (train) | TPR@1%FPR 4 | 36
Membership Inference Attack | PubMed Pythia | ROC AUC 66 | 36
(Showing 10 of 14 rows)
