
Exploiting Modality-Specific Features For Multi-Modal Manipulation Detection And Grounding

About

AI-synthesized text and images have gained significant attention, particularly due to the widespread dissemination of multi-modal manipulations on the internet, which has resulted in numerous negative impacts on society. Existing methods for multi-modal manipulation detection and grounding primarily focus on fusing vision-language features to make predictions, while overlooking the importance of modality-specific features, leading to sub-optimal results. In this paper, we construct a simple and novel transformer-based framework for multi-modal manipulation detection and grounding tasks. Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment. To achieve this, we introduce visual/language pre-trained encoders and dual-branch cross-attention (DCA) to extract and fuse modality-unique features. Furthermore, we design decoupled fine-grained classifiers (DFC) to enhance modality-specific feature mining and mitigate modality competition. Moreover, we propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality using learnable queries, thereby improving the discovery of forged details. Extensive experiments on the $\rm DGM^4$ dataset demonstrate the superior performance of our proposed model compared to state-of-the-art approaches.
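The core fusion step described above, dual-branch cross-attention where each modality attends to the other while a residual path preserves its modality-specific features, can be sketched as follows. This is a minimal single-head NumPy illustration of the general technique, not the authors' implementation; the learned query/key/value projection matrices and multi-head structure are omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, kv):
    # scaled dot-product attention: queries from one modality,
    # keys/values from the other (projections omitted for brevity)
    d = q.shape[-1]
    scores = q @ kv.T / np.sqrt(d)
    return softmax(scores) @ kv

def dual_branch_cross_attention(img, txt):
    # each branch attends to the OTHER modality; the residual
    # connection keeps the branch's own modality-specific features
    img_out = img + cross_attention(img, txt)
    txt_out = txt + cross_attention(txt, img)
    return img_out, txt_out

rng = np.random.default_rng(0)
img_tokens = rng.standard_normal((49, 64))  # e.g. 7x7 patch features
txt_tokens = rng.standard_normal((16, 64))  # e.g. 16 token features
img_f, txt_f = dual_branch_cross_attention(img_tokens, txt_tokens)
print(img_f.shape, txt_f.shape)  # (49, 64) (16, 64)
```

Note the symmetry: both branches keep their original token counts and dimensions, so each modality retains a dedicated feature stream after fusion rather than being collapsed into a single joint representation.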

Jiazhen Wang, Bin Liu, Changtao Miao, Zhiwei Zhao, Wanyi Zhuang, Qi Chu, Nenghai Yu • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multi-Label Classification | DGM4 Entire Dataset 1.0 (test) | mAP 91.42 | 15 |
| Image Grounding | DGM4 Image Sub-dataset 1.0 (test) | IoU Mean 80.83 | 15 |
| Binary Classification | DGM4 Entire Dataset 1.0 (test) | AUC 95.11 | 8 |
| Text Grounding | DGM4 Entire Dataset 1.0 (test) | PR 76.51 | 8 |
