
Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment

About

Multimodal large language models (MLLMs) remain vulnerable to transferable adversarial examples. While existing methods typically achieve targeted attacks by aligning global features (such as CLIP's [CLS] token) between adversarial and target samples, they often overlook the rich local information encoded in patch tokens. This leads to suboptimal alignment and limited transferability, particularly for closed-source models. To address this limitation, we propose a targeted transferable adversarial attack method based on feature optimal alignment, called FOA-Attack, to improve adversarial transfer capability. Specifically, at the global level, we introduce a global feature loss based on cosine similarity to align the coarse-grained features of adversarial samples with those of target samples. At the local level, given the rich local representations within Transformers, we leverage clustering techniques to extract compact local patterns and alleviate redundant local features. We then formulate local feature alignment between adversarial and target samples as an optimal transport (OT) problem and propose a local clustering optimal transport loss to refine fine-grained feature alignment. Additionally, we propose a dynamic ensemble model weighting strategy to adaptively balance the influence of multiple models during adversarial example generation, thereby further improving transferability. Extensive experiments across various models demonstrate the superiority of the proposed method, outperforming state-of-the-art methods, especially in transferring to closed-source MLLMs. The code is released at https://github.com/jiaxiaojunQAQ/FOA-Attack.
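The two alignment losses described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function names (`foa_style_loss`, `kmeans`, `sinkhorn`), the cluster count `k`, the entropic regularization strength, and the loss weighting `lam` are all assumptions chosen for readability. The global term is a cosine distance between [CLS]-style features; the local term clusters patch tokens into compact prototypes and measures an entropic-OT (Sinkhorn) cost between the two prototype sets.

```python
import numpy as np

def kmeans(tokens, k, iters=10, seed=0):
    # Compress redundant patch tokens into k local prototypes (toy k-means).
    rng = np.random.default_rng(seed)
    centers = tokens[rng.choice(len(tokens), size=k, replace=False)]
    for _ in range(iters):
        dists = ((tokens[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = tokens[labels == j].mean(0)
    return centers

def sinkhorn(cost, reg=0.05, iters=200):
    # Entropic OT between uniform marginals; returns the transport cost.
    n, m = cost.shape
    K = np.exp(-cost / reg)
    a, b = np.ones(n) / n, np.ones(m) / m
    v = np.ones(m) / m
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    plan = u[:, None] * K * v[None, :]
    return float((plan * cost).sum())

def foa_style_loss(adv_cls, tgt_cls, adv_patches, tgt_patches, k=4, lam=1.0):
    # Global term: cosine distance between coarse-grained [CLS] features.
    cos = adv_cls @ tgt_cls / (np.linalg.norm(adv_cls) * np.linalg.norm(tgt_cls))
    global_loss = 1.0 - cos
    # Local term: OT between clustered patch prototypes under a cosine cost.
    A = kmeans(adv_patches, k)
    B = kmeans(tgt_patches, k)
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    local_loss = sinkhorn(1.0 - A @ B.T)
    return global_loss + lam * local_loss
```

In an attack loop this scalar would be minimized over the adversarial perturbation (e.g. by PGD under an L-infinity budget), pulling both coarse and fine-grained features of the adversarial image toward the target image; the paper's dynamic ensemble weighting would then combine this loss across several surrogate encoders.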

Xiaojun Jia, Sensen Gao, Simeng Qin, Tianyu Pang, Chao Du, Yihao Huang, Xinfeng Li, Yiming Li, Bo Li, Yang Liu • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Adversarial Attack | NIPS Adversarial Attacks and Defenses Competition dataset 2017 | ASR | 62 | 25
Universal Targeted Adversarial Attack | Seen Samples (Used for Optimization) (train) | KMRa | 18.6 | 18
Mobile GUI Automation | PrivScreen | Accuracy | 80 | 18
Universal Targeted Adversarial Attack | Unseen (test) | KMRa | 5.6 | 18
Black-box Adversarial Attack | GPT-5 | KMRa | 90 | 9
Imperceptibility Evaluation | Black-Box LVLM Attack Set | L1 Distance | 0.031 | 9
Black-box Adversarial Attack | Gemini 2.5-Pro | KMRa | 0.61 | 9
Black-box Adversarial Attack | Claude thinking 4.0 | KMRa | 0.13 | 9
Black-box Adversarial Attack | Qwen VL 2.5 | KMRa | 83 | 6
Black-box Adversarial Attack | LLaVa 1.5 | KMRa | 0.94 | 6

Showing 10 of 16 rows.
