
Off-the-shelf Vision Models Benefit Image Manipulation Localization

About

Image manipulation localization (IML) and general vision tasks are typically treated as two separate research directions due to the fundamental differences between manipulation-specific and semantic features. In this paper, however, we bridge this gap by introducing a fresh perspective: the two directions are intrinsically connected, and general semantic priors can benefit IML. Building on this insight, we propose a novel trainable adapter (named ReVi) that repurposes existing off-the-shelf general-purpose vision models (e.g., image generation and segmentation networks) for IML. Inspired by robust principal component analysis, the adapter disentangles semantic redundancy from the manipulation-specific information embedded in these models and selectively enhances the latter. Unlike existing IML methods that require extensive model redesign and full retraining, our method keeps the off-the-shelf vision models frozen and fine-tunes only the proposed adapter. Experimental results demonstrate the superiority of our method and show its potential as a scalable IML framework.

Zhengxuan Zhang, Keji Song, Junmin Hu, Ao Luo, Yuezun Li • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Manipulation Localization | NIST16 | F1 Score | 88.52 | 75 |
| Image Manipulation Localization | Coverage | F1 Score | 70.01 | 49 |
| Image Manipulation Localization | Columbia | F1 Score | 98.82 | 42 |
| Image Manipulation Localization | CASIA v1 | F1 Score | 61.02 | 36 |
| Image Manipulation Localization | IMD20 | F1 Score | 52.76 | 24 |
