
Inhibitory Attacks on Backdoor-based Fingerprinting for Large Language Models

About

The widespread adoption of Large Language Models (LLMs) in commercial and research settings has intensified the need for robust intellectual property protection. Backdoor-based LLM fingerprinting has emerged as a promising solution to this challenge. Meanwhile, LLM ensembling, a low-cost multi-model collaboration technique that combines diverse LLMs to leverage their complementary strengths, has garnered significant attention and practical adoption. Unfortunately, the vulnerability of existing LLM fingerprinting in the ensemble scenario remains unexplored. To comprehensively assess the robustness of LLM fingerprinting, this paper proposes two novel fingerprinting attack methods: the token filter attack (TFA) and the sentence verification attack (SVA). At each decoding step, TFA selects the next token from a unified token set produced by a token filter mechanism. SVA filters out fingerprint responses through a sentence verification mechanism based on perplexity and voting. Experiments show that the proposed methods effectively suppress fingerprint responses while maintaining ensemble performance, and they outperform state-of-the-art attack methods. These findings highlight the need for more robust LLM fingerprinting.
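The two attack ideas from the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the per-model logit dictionaries, the `top_k` cutoff, the perplexity threshold, and all function names are assumptions introduced here for demonstration.

```python
# Hypothetical sketch of the two attacks described in the abstract.
# TFA: filter each decoding step's candidates down to tokens shared
# across ensemble members, so a trigger token favored by only one
# (fingerprinted) model is excluded.
# SVA: filter out fingerprint responses by perplexity, then vote.
from collections import Counter

def token_filter_step(per_model_logits, top_k=2):
    """TFA sketch: each ensemble member proposes its top-k tokens;
    only tokens proposed by every model survive the filter."""
    candidate_sets = [
        {tok for tok, _ in sorted(logits.items(), key=lambda kv: -kv[1])[:top_k]}
        for logits in per_model_logits
    ]
    shared = set.intersection(*candidate_sets)
    if not shared:  # fallback when the models fully disagree
        shared = set().union(*candidate_sets)
    # score surviving tokens by their average logit across models
    return max(shared, key=lambda t: sum(l.get(t, float("-inf"))
                                         for l in per_model_logits))

def sentence_verification(responses, perplexity_fn, ppl_threshold=50.0):
    """SVA sketch: drop responses whose perplexity is abnormally high
    (fingerprint responses tend to be unnatural), then return the
    majority-voted answer among the survivors."""
    plausible = [r for r in responses if perplexity_fn(r) <= ppl_threshold]
    pool = plausible or responses  # never return an empty choice
    return Counter(pool).most_common(1)[0][0]
```

As a toy example, if one ensemble member assigns its highest logit to a fingerprint trigger token while the others do not, the intersection step removes that token and decoding proceeds with a token all members agree on.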

Hang Fu, Wanli Peng, Yinghan Zhou, Jiaxuan Wu, Juan Wen, Yiming Xue • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Fingerprint Removal | LLM Fingerprinting Evaluation Alpaca-GPT4-52k | ASR Error Rate | 90 | 66 |
| Fingerprinting | CTCC fingerprinting | ASR (%) | 100 | 20 |
| Attack Success Rate | CTCC fingerprinting scenario b | SVA | 100 | 18 |
| Model Fingerprinting Evasion | LLM Target Model Suite (test) | LLaMA 7B | 100 | 6 |
