Share your thoughts, 1 month free Claude Pro on usSee more
WorkDL logo mark

TrojVLM: Backdoor Attack Against Vision Language Models

About

The emergence of Vision Language Models (VLMs) is a significant advancement in integrating computer vision with Large Language Models (LLMs) to produce detailed text descriptions based on visual inputs, yet it introduces new security vulnerabilities. Unlike prior work that centered on single modalities or classification tasks, this study introduces TrojVLM, the first exploration of backdoor attacks aimed at VLMs engaged in complex image-to-text generation. Specifically, TrojVLM inserts predetermined target text into output text when encountering poisoned images. Moreover, a novel semantic preserving loss is proposed to ensure the semantic integrity of the original image content. Our evaluation on image captioning and visual question answering (VQA) tasks confirms the effectiveness of TrojVLM in maintaining original semantic content while triggering specific target text outputs. This study not only uncovers a critical security risk in VLMs and image-to-text generation but also sets a foundation for future research on securing multimodal models against such sophisticated threats.

Weimin Lyu, Lu Pang, Tengfei Ma, Haibin Ling, Chao Chen• 2024

Related benchmarks

TaskDatasetResultRank
Vision Question AnsweringOKVQA
ASR (Success Rate)69.41
30
Image CaptioningFlickr8K
BLEU@427.52
20
Vision Question AnsweringVQA v2
ASR51.99
10
Image CaptioningFlickr30K
BLEU@415.81
10
Showing 4 of 4 rows

Other info

Follow for update