
Visual Grounding with Multi-modal Conditional Adaptation

About

Visual grounding is the task of locating objects specified by natural language expressions. Existing methods extend generic object detection frameworks to tackle this task. They typically extract visual and textual features separately using independent visual and textual encoders, then fuse these features in a multi-modal decoder for final prediction. However, visual grounding presents unique challenges. It often involves locating objects with different text descriptions within the same image. Existing methods struggle with this task because the independent visual encoder produces identical visual features for the same image, limiting detection performance. Some recent approaches propose various language-guided visual encoders to address this issue, but they mostly rely solely on textual information and require sophisticated designs. In this paper, we introduce Multi-modal Conditional Adaptation (MMCA), which enables the visual encoder to adaptively update weights, directing its focus towards text-relevant regions. Specifically, we first integrate information from different modalities to obtain multi-modal embeddings. Then we utilize a set of weighting coefficients, which are generated from the multi-modal embeddings, to reorganize the weight update matrices and apply them to the visual encoder of the visual grounding model. Extensive experiments on four widely used datasets demonstrate that MMCA achieves significant improvements and state-of-the-art results. Ablation experiments further demonstrate that our method is lightweight and efficient. Our source code is available at: https://github.com/Mr-Bigworth/MMCA.
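The core mechanism described above (coefficients generated from a fused multi-modal embedding reorganize a bank of weight-update matrices applied to the visual encoder) can be sketched as a small PyTorch module. This is an illustrative approximation, not the authors' implementation: the class name, dimensions, the low-rank parameterization of the update matrices, and the softmax-mixed expert bank are all assumptions made for the sake of a self-contained example.

```python
import torch
import torch.nn as nn


class ConditionalAdapter(nn.Module):
    """Sketch of an MMCA-style conditionally adapted linear layer.

    A frozen base weight from the visual encoder is augmented with a
    weighted combination of low-rank update matrices. The mixing
    coefficients are produced from a fused multi-modal embedding, so the
    effective visual weights change with the text query. All names and
    sizes here are hypothetical.
    """

    def __init__(self, dim: int = 256, rank: int = 4, num_updates: int = 8):
        super().__init__()
        self.base = nn.Linear(dim, dim)              # stands in for a frozen encoder weight
        self.base.weight.requires_grad_(False)
        # Bank of candidate low-rank updates: W_i = B_i @ A_i
        self.A = nn.Parameter(torch.randn(num_updates, rank, dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_updates, dim, rank))
        # Maps the multi-modal embedding to per-sample mixing coefficients
        self.coef = nn.Linear(dim, num_updates)

    def forward(self, x: torch.Tensor, mm_embed: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) visual tokens
        # mm_embed: (batch, dim) fused visual+textual embedding
        alpha = torch.softmax(self.coef(mm_embed), dim=-1)        # (batch, num_updates)
        # Reorganize the update matrices with the generated coefficients
        delta = torch.einsum('be,eor,erd->bod', alpha, self.B, self.A)
        # Apply base weights plus the per-sample conditional update
        return self.base(x) + torch.einsum('bod,btd->bto', delta, x)
```

Because `B` starts at zero, the module initially behaves exactly like the frozen base layer, and the text-conditioned update is learned on top; this keeps the adaptation lightweight relative to fine-tuning the full encoder.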

Ruilin Yao, Shengwu Xiong, Yichen Zhao, Yi Rong • 2024

Related benchmarks

Task              Dataset                       Metric              Result  Rank
Visual Grounding  RefFLIR 1.0 (val)             Accuracy @ 0.5 IoU  50.16   29
Visual Grounding  RefFLIR RGBT-Ground (test)    Accuracy @ 0.5 IoU  48.97   10
Visual Grounding  RefM3FD RGBT-Ground (val)     Accuracy @ 0.5 IoU  46.43   10
Visual Grounding  RefM3FD RGBT-Ground (test)    Accuracy @ 0.5 IoU  47.83   10
Visual Grounding  RefMFAD RGBT-Ground (test)    Accuracy @ 0.5 IoU  54.41   10
Visual Grounding  RefFLIR RGBT-Ground (val)     Accuracy @ 0.5 IoU  54.93   10
Visual Grounding  RefMFAD RGBT-Ground (val)     Accuracy @ 0.5 IoU  53.62   10
