
TCAP: Tri-Component Attention Profiling for Unsupervised Backdoor Detection in MLLM Fine-Tuning

About

Fine-Tuning-as-a-Service (FTaaS) facilitates the customization of Multimodal Large Language Models (MLLMs) but introduces critical backdoor risks via poisoned data. Existing defenses either rely on supervised signals or fail to generalize across diverse trigger types and modalities. In this work, we uncover a universal backdoor fingerprint, attention allocation divergence, whereby poisoned samples disrupt the balanced attention distribution across three functional components: system instructions, vision inputs, and user textual queries, regardless of trigger morphology. Motivated by this insight, we propose Tri-Component Attention Profiling (TCAP), an unsupervised defense framework that filters backdoor samples. TCAP decomposes cross-modal attention maps into the three components, identifies trigger-responsive attention heads via Gaussian Mixture Model (GMM) statistical profiling, and isolates poisoned samples through EM-based vote aggregation. Extensive experiments across diverse MLLM architectures and attack methods demonstrate that TCAP achieves consistently strong performance, establishing it as a robust and practical backdoor defense for MLLMs.
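The pipeline described in the abstract can be sketched in code. This is a minimal illustration only, not the paper's implementation: the per-head divergence scores are synthetic toy data, and the head-selection and voting thresholds are assumptions chosen for clarity.

```python
# Illustrative sketch of the TCAP idea: per-head GMM profiling followed
# by vote aggregation. All data and thresholds here are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Step 1 (assumed feature): for each sample and attention head, a scalar
# summarizing attention-allocation divergence across the three components
# (system instructions / vision inputs / user text). Clean samples cluster
# low; poisoned samples cluster high.
n_heads, n_clean, n_poison = 8, 180, 20
clean = rng.normal(0.1, 0.03, size=(n_clean, n_heads))
poison = rng.normal(0.6, 0.05, size=(n_poison, n_heads))
scores = np.vstack([clean, poison])  # shape: (200, n_heads)

# Step 2: per-head GMM profiling. A head counts as "trigger-responsive"
# when a 2-component fit separates its scores into two well-spread modes
# (separation threshold is an illustrative choice).
votes = np.zeros(scores.shape[0], dtype=int)
responsive_heads = 0
for h in range(n_heads):
    x = scores[:, h].reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    means = gmm.means_.ravel()
    if abs(means[0] - means[1]) > 3 * np.sqrt(gmm.covariances_).max():
        responsive_heads += 1
        # Step 3: the higher-mean (outlier) component votes "poisoned".
        hi = int(np.argmax(means))
        votes += (gmm.predict(x) == hi).astype(int)

# Aggregate votes: flag samples that a majority of responsive heads marked.
flagged = votes > responsive_heads / 2
print(f"responsive heads: {responsive_heads}, flagged samples: {flagged.sum()}")
```

On this toy data the two modes are far apart, so every head is selected and the flagged set coincides with the injected poisoned samples; in practice the separation test and the voting rule would need tuning to the actual attention statistics.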

Mingzu Liu, Hao Fang, Runmin Cong • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multimodal Understanding | SEED-Bench | - | 203 |
| Multimodal Question Answering | Recap-COCO | CP 65.94 | 15 |
| Science Question Answering | ScienceQA | Correct Prediction Rate 96.93 | 15 |
| Multimodal Reasoning | PhD | CP 90.27 | 15 |
