
When Visual Privacy Protection Meets Multimodal Large Language Models

About

The emergence of Multimodal Large Language Models (MLLMs) and the widespread use of MLLM cloud services such as GPT-4V have raised serious concerns about privacy leakage in visual data. Because these models are typically deployed as cloud services, users must upload their images and videos, which poses serious privacy risks. How to address such privacy concerns, however, remains an under-explored problem. In this paper, we therefore investigate how to protect visual privacy while still enjoying the convenience brought by MLLM services. We address the practical case where the MLLM is a "black box", i.e., only its inputs and outputs are accessible, and its internal model information is unknown. To tackle this challenging yet demanding problem, we propose a novel framework in which we carefully design a learning objective based on Pareto optimality to seek a better trade-off between visual privacy and the MLLM's performance, and we propose critical-history enhanced optimization to effectively optimize the framework with the black-box MLLM. Our experiments show that our method is effective on different benchmarks.
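The paper's framework is not detailed on this page, but the Pareto-optimality idea in the abstract — keeping only those privacy-preserving candidates that are not dominated on either the privacy axis or the task-utility axis — can be illustrated with a minimal sketch. All names and scores below are hypothetical and serve only to show the general notion of a Pareto front; this is not the paper's actual algorithm or objective.

```python
def dominates(a, b):
    """a dominates b if a is at least as good on every objective
    and strictly better on at least one (higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical (privacy_protection, task_utility) scores for candidate
# perturbations, each evaluated by querying a black-box MLLM.
scores = [(0.9, 0.4), (0.6, 0.7), (0.5, 0.6), (0.3, 0.9), (0.8, 0.5)]
front = pareto_front(scores)
# (0.5, 0.6) is dominated by (0.6, 0.7) and drops out of the front.
```

A black-box search would then pick its next queries from (or near) this front, trading privacy against utility without needing gradients from the MLLM.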

Xiaofei Hui, Qian Wu, Haoxuan Qu, Majid Mirmehdi, Hossein Rahmani, Jun Liu • 2026

Related benchmarks

Task                      | Dataset       | Metric         | Result | Rank
--------------------------|---------------|----------------|--------|-----
Visual Question Answering | OK-VQA (test) | Accuracy       | 57.6   | 327
Visual Question Answering | OK-VQA        | Accuracy       | 59.2   | 260
Action Recognition        | HMDB51 VISPR  | Top-1 Accuracy | 54.3   | 24
Action Recognition        | UCF101 VISPR  | Top-1 Accuracy | 67.1   | 24
Privacy Protection        | HMDB51 VISPR  | cMAP           | 59.8   | 12
Privacy Protection        | UCF101 VISPR  | cMAP           | 54.6   | 12
Privacy Protection        | OK-VQA (test) | cMAP           | 44.2   | 10
Privacy Protection        | VISPR (test)  | cMAP           | 48.8   | 10
Privacy Recognition       | OK-VQA        | cMAP           | 44.2   | 10
Privacy Recognition       | VISPR         | cMAP           | 47.9   | 10
