
Do We Really Need Curated Malicious Data for Safety Alignment in Multi-modal Large Language Models?

About

Multi-modal large language models (MLLMs) have made significant progress, yet their safety alignment remains limited. Current open-source MLLMs typically rely on the alignment inherited from their language module to avoid harmful generations. However, the lack of safety measures specifically designed for multi-modal inputs creates an alignment gap, leaving MLLMs vulnerable to vision-domain attacks such as typographic manipulation. Current methods use carefully designed safety datasets to enhance model defenses, yet the specific knowledge or patterns acquired from these high-quality datasets remain unclear. Through comparison experiments, we find that the alignment gap arises primarily from data distribution biases, while image content, response quality, and the contrastive behavior of the dataset contribute little to multi-modal safety. To investigate this further and identify the key factors in improving MLLM safety, we propose finetuning MLLMs on a small set of benign instruction-following data whose responses are replaced with simple, clear rejection sentences. Experiments show that, without labor-intensive collection of high-quality malicious data, model safety can still be significantly improved as long as a specific fraction of rejection data exists in the finetuning set, indicating that safety alignment is not lost but rather obscured during multi-modal pretraining or instruction finetuning. Simply correcting the underlying data bias could narrow the safety gap in the vision domain.
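The data recipe described above (benign instruction-following examples with a fraction of responses swapped for a plain rejection sentence) can be sketched as follows. This is a minimal illustration, not the paper's released code: the field names, the rejection string, and the 30% fraction are illustrative assumptions.

```python
import random

# Illustrative rejection sentence; the paper uses "simple, clear
# rejection sentences" but this exact wording is an assumption.
REJECTION = "I'm sorry, but I can't help with that request."

def build_rejection_mixture(benign_examples, rejection_fraction, seed=0):
    """Copy a benign instruction-following dataset and replace the
    responses of `rejection_fraction` of the examples with a fixed
    rejection sentence, leaving the instructions untouched."""
    rng = random.Random(seed)
    examples = [dict(ex) for ex in benign_examples]  # shallow copies
    k = int(len(examples) * rejection_fraction)
    for idx in rng.sample(range(len(examples)), k):
        examples[idx]["response"] = REJECTION
    return examples

# Usage with a toy dataset (hypothetical schema: instruction/response).
data = [{"instruction": f"Describe image {i}.", "response": f"Answer {i}"}
        for i in range(10)]
mixed = build_rejection_mixture(data, rejection_fraction=0.3)
n_rejections = sum(ex["response"] == REJECTION for ex in mixed)
print(n_rejections)  # 3 of 10 responses replaced
```

The resulting mixture would then be used as an ordinary supervised finetuning set; the paper's claim is that safety improves once the rejection fraction crosses a threshold, without any curated malicious prompts.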

Yanbo Wang, Jiyang Guan, Jian Liang, Ran He • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Science Question Answering | ARC Challenge | - | - | 234
Jailbreak Safety Evaluation | MM-Safety Bench (test) | Average ASR | 0.18 | 56
Safety | Safety Evaluation Suite (Salad-Bench, WildJailbreak, JailbreakBench, WildChat, WildGuard) | Safety Rate (S.R.) | 100 | 24
Over-refusal | Over-refusal Evaluation Suite (XSTest, WildJailbreak, WildGuard, OKTest, OR-Bench) | XSTest Refusal Rate (%) | 11.2 | 24
Visual Question Answering | VizWizQA | Accuracy | 66.2 | 21
Science Question Answering | GPQA Diamond | Avg@1 Score | 56.57 | 19
Multi-task Knowledge and Reasoning | MMLU-Pro | Average Score @1 | 72.83 | 18
Mathematical Word Problem Solving | GSM8K | Pass@8 | 98.71 | 18
General Reasoning | MATH-500, GPQA-D, MMLU-P, GSM8K, ARC-C Aggregate | Average Score | 83.22 | 18
Safety Alignment | XSTest | Compliance | 95.2 | 12
(Showing 10 of 11 rows.)
