
BARE: Towards Bias-Aware and Reasoning-Enhanced One-Tower Visual Grounding

About

Visual Grounding (VG), which aims to locate a specific region referred to by an expression, is a fundamental yet challenging task in multimodal understanding. While recent grounding transfer works have advanced the field through one-tower architectures, they still suffer from two primary limitations: (1) over-entangled multimodal representations that exacerbate deceptive modality biases, and (2) insufficient semantic reasoning that hinders the comprehension of referential cues. In this paper, we propose BARE, a bias-aware and reasoning-enhanced framework for one-tower visual grounding. BARE preserves modality-specific features and constructs referential semantics through three novel modules: (i) a language salience modulator, (ii) visual bias correction, and (iii) referential relationship enhancement, which jointly mitigate multimodal distractions and enhance referential comprehension. Extensive experimental results on five benchmarks demonstrate that BARE not only achieves state-of-the-art performance but also delivers superior computational efficiency compared to existing approaches. The code is publicly accessible at https://github.com/Marloweeee/BARE.
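To make the abstract's three-module pipeline concrete, here is a deliberately simplified NumPy sketch of how such components could compose in a one-tower forward pass. All shapes, scoring rules, and the module internals below are hypothetical illustrations, not BARE's actual design; see the linked repository for the real implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64
V = rng.standard_normal((196, D))  # visual patch tokens (hypothetical shapes)
T = rng.standard_normal((12, D))   # language tokens

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# (i) Language salience modulation (illustrative): reweight each word by its
# similarity to the sentence-level feature, so referential words carry more
# weight in the fused sequence.
sal = softmax(T @ T.mean(axis=0) / np.sqrt(D))
T_mod = T * (len(T) * sal)[:, None]

# (ii) Visual bias correction (illustrative): damp visual tokens whose best
# response to any word is weak, a crude proxy for modality-specific bias.
resp = softmax(V @ T_mod.T / np.sqrt(D), axis=-1).max(axis=-1)
V_cor = V * (resp / resp.max())[:, None]

# (iii) Referential relationship enhancement (illustrative): cross-attend the
# corrected visual tokens to the modulated language, then score each patch as
# a grounding candidate against the sentence feature.
attn = softmax(V_cor @ T_mod.T / np.sqrt(D), axis=-1)
fused = V_cor + attn @ T_mod
scores = fused @ T_mod.mean(axis=0)
best_patch = int(np.argmax(scores))  # index of the most-referred patch
print(best_patch)
```

The point of the sketch is only the dataflow: language is reweighted before fusion, visual tokens are corrected before cross-modal interaction, and grounding is scored on the fused sequence rather than on raw, fully entangled features.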

Hongbing Li, Linhui Xiao, Zihan Zhao, Qi Shen, Yixiang Huang, Bo Xiao, Zhanyu Ma • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Referring Expression Comprehension | RefCOCO+ (val) | Accuracy | 88.36 | 345
Referring Expression Comprehension | RefCOCO (testA) | -- | -- | 333
Referring Expression Comprehension | RefCOCO (testB) | Accuracy | 90.78 | 196
Referring Expression Comprehension | RefCOCOg (val (U)) | Accuracy | 90.58 | 57
Referring Expression Comprehension | RefCOCO v1 (val) | Top-1 Accuracy | 92.83 | 49
Phrase grounding | ReferIt (test) | Pointing Accuracy | 80.58 | 18
Phrase grounding | Flickr30k (test) | Accuracy | 83.68 | 18
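For context on the metric: "Accuracy" on referring expression benchmarks is conventionally Acc@0.5, the fraction of expressions whose predicted box overlaps the ground-truth box with IoU of at least 0.5. A minimal sketch of that computation (box coordinates and the sample data are illustrative):

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def acc_at_05(preds, gts):
    """Fraction of predictions with IoU >= 0.5 against their ground truth."""
    hits = sum(iou(p, g) >= 0.5 for p, g in zip(preds, gts))
    return hits / len(gts)

# Toy example: first prediction matches exactly, second misses entirely.
preds = [(0, 0, 2, 2), (0, 0, 1, 1)]
gts = [(0, 0, 2, 2), (2, 2, 3, 3)]
print(acc_at_05(preds, gts))  # → 0.5
```

Pointing Accuracy on ReferIt is a looser variant: a prediction counts as correct if its predicted point (e.g. the box center) falls inside the ground-truth region, rather than requiring an IoU threshold.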
