
SPA-VL: A Comprehensive Safety Preference Alignment Dataset for Vision Language Model

About

The emergence of Vision Language Models (VLMs) has brought unprecedented advances in understanding multimodal information. The combination of textual and visual semantics in VLMs is highly complex and diverse, making the safety alignment of these models challenging. Furthermore, because studies on the safety alignment of VLMs remain limited, there is a lack of large-scale, high-quality datasets. To address these limitations, we propose a Safety Preference Alignment dataset for Vision Language Models named SPA-VL. In terms of breadth, SPA-VL covers 6 harmfulness domains, 13 categories, and 53 subcategories, and contains 100,788 samples of the quadruple (question, image, chosen response, rejected response). In terms of depth, the responses are collected from 12 open-source (e.g., QwenVL) and closed-source (e.g., Gemini) VLMs to ensure diversity. The construction of preference data is fully automated, and the experimental results indicate that models trained with alignment techniques on the SPA-VL dataset exhibit substantial improvements in harmlessness and helpfulness while maintaining core capabilities. SPA-VL, as a large-scale, high-quality, and diverse dataset, represents a significant milestone in ensuring that VLMs achieve both harmlessness and helpfulness.
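To make the quadruple structure concrete, the following is a minimal sketch of how one SPA-VL-style sample could be represented and flattened into the (prompt, chosen, rejected) form that common preference-optimization trainers (e.g., DPO-style) expect. The field names, the `SPAVLSample` class, and the example strings are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass


@dataclass
class SPAVLSample:
    """One preference quadruple, as described in the SPA-VL abstract.

    Field names here are hypothetical; consult the released dataset
    for the real schema.
    """
    question: str    # the (possibly harmful) prompt paired with the image
    image_path: str  # path or URL of the associated image
    chosen: str      # preferred (harmless and helpful) response
    rejected: str    # dispreferred response


def to_preference_pair(sample: SPAVLSample) -> dict:
    """Flatten a quadruple into a prompt/chosen/rejected record.

    The image is folded into the prompt as a placeholder tag; a real
    VLM pipeline would instead pass pixel data through its own
    image-processing path.
    """
    prompt = f"<image: {sample.image_path}>\n{sample.question}"
    return {
        "prompt": prompt,
        "chosen": sample.chosen,
        "rejected": sample.rejected,
    }


# Illustrative sample (contents invented for the sketch).
sample = SPAVLSample(
    question="How can I bypass this lock?",
    image_path="images/lock_001.jpg",
    chosen="I can't help with bypassing locks you don't own.",
    rejected="First, insert a tension wrench into the keyway...",
)
pair = to_preference_pair(sample)
```

The resulting dictionary can then be fed to any trainer that consumes prompt/chosen/rejected triples.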

Yongting Zhang, Lu Chen, Guodong Zheng, Yifeng Gao, Rui Zheng, Jinlan Fu, Zhenfei Yin, Senjie Jin, Yu Qiao, Xuanjing Huang, Feng Zhao, Tao Gui, Jing Shao • 2024

Related benchmarks

Task               Dataset           Metric       Result   Rank
Safety Evaluation  MM-SafetyBench    Average ASR  27.56    42
Safety Evaluation  JailBreakV        ASR          21.46    15
Moral Alignment    MM-SCALE (test)   NDCG@5       0.86     12
Safety Evaluation  MMSafe-PO         Helpfulness  48.05    12
Safety Evaluation  SafeMT            Helpfulness  48.36    12
