
Automating Steering for Safe Multimodal Large Language Models

About

Recent progress in Multimodal Large Language Models (MLLMs) has unlocked powerful cross-modal reasoning abilities, but has also raised new safety concerns, particularly when models face adversarial multimodal inputs. To improve the safety of MLLMs during inference, we introduce AutoSteer, a modular and adaptive inference-time intervention framework that requires no fine-tuning of the underlying model. AutoSteer incorporates three core components: (1) a novel Safety Awareness Score (SAS) that automatically identifies the most safety-relevant distinctions among the model's internal layers; (2) an adaptive safety prober trained to estimate the likelihood of toxic outputs from intermediate representations; and (3) a lightweight Refusal Head that selectively intervenes to modulate generation when safety risks are detected. Experiments on LLaVA-OV and Chameleon across diverse safety-critical benchmarks demonstrate that AutoSteer significantly reduces the Attack Success Rate (ASR) for textual, visual, and cross-modal threats while maintaining general abilities. These findings position AutoSteer as a practical, interpretable, and effective framework for safer deployment of multimodal AI systems.
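The three-stage pipeline described above can be illustrated with a minimal sketch. Note this is a hypothetical simplification, not the paper's implementation: the SAS computation, prober architecture, and intervention mechanism here are stand-ins (a per-layer activation-separation score, a linear sigmoid prober, and a hard refusal gate) chosen only to show how the components compose at inference time.

```python
import math

def safety_awareness_score(clean_acts, toxic_acts):
    # Hypothetical SAS: for each layer, measure how far apart the mean
    # activations are on safe vs. unsafe probe inputs. The layer with the
    # largest score is taken as the most safety-relevant one to probe.
    scores = []
    for c, t in zip(clean_acts, toxic_acts):
        sep = sum((ci - ti) ** 2 for ci, ti in zip(c, t)) ** 0.5
        scores.append(sep)
    return scores

def prober(hidden, weights, bias):
    # Lightweight linear safety prober: a sigmoid over a dot product with
    # the selected layer's hidden state estimates the probability that the
    # current input will lead to a toxic output.
    z = sum(h * w for h, w in zip(hidden, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def autosteer_step(hidden, weights, bias, threshold=0.5):
    # If the estimated risk exceeds the threshold, steer generation through
    # a refusal intervention; otherwise decode normally.
    p_toxic = prober(hidden, weights, bias)
    if p_toxic > threshold:
        return "refuse", p_toxic
    return "generate", p_toxic
```

Because only a small prober and a gating rule sit on top of frozen intermediate representations, the intervention stays modular: it can be attached to, tuned for, or detached from a deployed model without touching its weights.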

Lyucheng Wu, Mengru Wang, Ziwen Xu, Tri Cao, Nay Oo, Bryan Hooi, Shumin Deng• 2025

Related benchmarks

Task                       | Dataset                    | Result                 | Rank
---------------------------|----------------------------|------------------------|-----
Science Question Answering | ScienceQA                  | --                     | 502
Multimodal Reasoning       | MM-Vet                     | MM-Vet Score: 47.5     | 431
Visual Question Answering  | GQA                        | Score: 61.9            | 193
Multimodal Evaluation      | MM-Vet                     | Score: 35.5            | 180
Over-refusal               | XSTest                     | Overrefusal Rate: 62.4 | 78
Multimodal Evaluation      | MME                        | MME-P Score: 1590      | 73
Safety Evaluation          | MM-Safety                  | ASR: 15.2              | 57
Safety Evaluation          | SPA-VL                     | ASR: 0.2               | 40
Safety Alignment           | JOOD                       | ASR: 1.1               | 40
Safety Alignment           | Visual Adversarial Attacks | ASR: 19.8              | 40

(Showing 10 of 15 rows)
