
What Hard Tokens Reveal: Exploiting Low-confidence Tokens for Membership Inference Attacks against Large Language Models

About

With the widespread adoption of Large Language Models (LLMs) and increasingly stringent privacy regulations, protecting data privacy in LLMs has become essential, especially for privacy-sensitive applications. Membership Inference Attacks (MIAs) attempt to determine whether a specific data sample was included in a model's training or fine-tuning dataset, posing serious privacy risks. However, most existing MIA techniques against LLMs rely on sequence-level aggregated prediction statistics, which fail to distinguish prediction improvements caused by generalization from those caused by memorization, leading to low attack effectiveness. To address this limitation, we propose HT-MIA, a novel membership inference approach that captures token-level probabilities for low-confidence (hard) tokens, where membership signals are more pronounced. By comparing token-level probability improvements at hard tokens between a fine-tuned target model and a pre-trained reference model, HT-MIA isolates strong and robust membership signals that are obscured by prior MIA approaches. Extensive experiments on both domain-specific medical datasets and general-purpose benchmarks demonstrate that HT-MIA consistently outperforms seven state-of-the-art MIA baselines. We further investigate differentially private training as an effective defense mechanism against MIAs in LLMs. Overall, our HT-MIA framework establishes hard-token-based analysis as a state-of-the-art foundation for advancing membership inference attacks and defenses for LLMs.
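The core idea in the abstract — score a sample by the probability improvement the fine-tuned model shows over a reference model on low-confidence tokens — can be sketched as follows. This is an illustrative re-implementation, not the authors' code: the function name `ht_mia_score`, the hard-token threshold `tau`, and the use of mean log-probability improvement are all assumptions; the real attack operates on per-token log-probabilities produced by actual LLM forward passes.

```python
import math

def ht_mia_score(ref_logprobs, tgt_logprobs, tau=0.5):
    """Membership score for one sample (hypothetical sketch of HT-MIA).

    ref_logprobs: per-token log-probabilities under the pre-trained
                  reference model.
    tgt_logprobs: per-token log-probabilities under the fine-tuned
                  target model (same tokenization).
    tau:          tokens whose reference-model probability falls below
                  tau are treated as "hard" (threshold is an assumption).
    """
    # Select hard tokens: the reference model is unconfident on them,
    # so large gains there are more likely memorization than generalization.
    hard = [i for i, lp in enumerate(ref_logprobs) if math.exp(lp) < tau]
    if not hard:
        return 0.0
    # Average log-probability improvement on hard tokens only.
    return sum(tgt_logprobs[i] - ref_logprobs[i] for i in hard) / len(hard)

# Toy example: a member-like sample gains most on its two hard tokens.
ref = [math.log(p) for p in [0.90, 0.10, 0.05, 0.80]]
tgt = [math.log(p) for p in [0.92, 0.60, 0.50, 0.81]]
score = ht_mia_score(ref, tgt)
```

A higher score indicates a larger memorization-driven gain, so samples would be ranked by this score and thresholded to decide membership; easy tokens (high reference-model confidence) are deliberately excluded because both members and non-members improve on them.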

Md Tasnim Jawad, Mingyan Xiao, Yanzhao Wu · 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Membership Inference Attack | Asclepius (fine-tuned) | TPR@FPR=0.01 | 8.17 | 58 |
| Membership Inference Attack | Clinicalnotes | TPR@FPR=0.01 | 0.4253 | 24 |
| Membership Inference Attack | Clinicalnotes (test) | AUC | 0.8843 | 24 |
| Membership Inference Attack | Clinicalnotes fine-tuned (test) | TPR@FPR=0.1 | 70.57 | 24 |
| Membership Inference Attack | Clinicalnotes (fine-tuned) | AUC | 0.8843 | 5 |
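The TPR@FPR metric reported above measures the true-positive rate when the decision threshold is calibrated so that the false-positive rate on non-members does not exceed a fixed budget (e.g. 0.01 or 0.1). A minimal sketch of how such a metric is computed, assuming higher scores indicate membership (this is illustrative, not the leaderboard's exact evaluation code):

```python
def tpr_at_fpr(member_scores, nonmember_scores, fpr=0.01):
    """True-positive rate at a fixed false-positive rate.

    Picks the threshold that admits at most `fpr` of non-members,
    then reports the fraction of members scoring above it.
    Assumes fpr * len(nonmember_scores) < len(nonmember_scores).
    """
    k = int(fpr * len(nonmember_scores))        # false positives allowed
    thresh = sorted(nonmember_scores, reverse=True)[k]
    return sum(s > thresh for s in member_scores) / len(member_scores)

# Toy check: 10 non-member scores, FPR budget 0.1 allows one false positive.
members = [0.9, 0.8, 0.7, 0.2]
nonmembers = [i / 20 for i in range(10)]        # 0.00, 0.05, ..., 0.45
rate = tpr_at_fpr(members, nonmembers, fpr=0.1)
```

Low-FPR operating points such as TPR@FPR=0.01 are the standard way to report MIA strength, since an attack is only practically worrying if it identifies members while rarely flagging non-members.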
