
Argus: Vision-Centric Reasoning with Grounded Chain-of-Thought

About

Recent advances in multimodal large language models (MLLMs) have demonstrated remarkable capabilities in vision-language tasks, yet they often struggle with vision-centric scenarios where precise visual focus is needed for accurate reasoning. In this paper, we introduce Argus to address these limitations with a new visual attention grounding mechanism. Our approach employs object-centric grounding as visual chain-of-thought signals, enabling more effective goal-conditioned visual attention during multimodal reasoning tasks. Evaluations on diverse benchmarks demonstrate that Argus excels in both multimodal reasoning tasks and referring object grounding tasks. Extensive analysis further validates various design choices of Argus, and reveals the effectiveness of explicit language-guided visual region-of-interest engagement in MLLMs, highlighting the importance of advancing multimodal intelligence from a visual-centric perspective. Project page: https://yunzeman.github.io/argus/
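The abstract describes a two-stage, vision-centric loop: the model first emits an explicit object-centric grounding (a region of interest) as a visual chain-of-thought signal, then conditions its final answer on that region. A minimal sketch of this idea is below; `predict_roi` and `answer` are hypothetical placeholders, not the Argus API, and the box clamping is an illustrative detail.

```python
from dataclasses import dataclass


@dataclass
class Box:
    """Axis-aligned bounding box in normalized [0, 1] image coordinates."""
    x1: float
    y1: float
    x2: float
    y2: float


def clamp_box(b: Box) -> Box:
    """Keep a predicted region of interest inside the unit image frame."""
    clip = lambda v: max(0.0, min(1.0, v))
    return Box(clip(b.x1), clip(b.y1), clip(b.x2), clip(b.y2))


def grounded_cot_answer(image, question, model):
    """Illustrative grounded chain-of-thought loop (not the actual method).

    Stage 1: language-guided grounding -- predict an RoI as an explicit
    intermediate "visual thought" conditioned on the question.
    Stage 2: goal-conditioned attention -- produce the answer conditioned
    on both the full image and the grounded region.
    """
    roi = clamp_box(model.predict_roi(image, question))  # hypothetical call
    return model.answer(image, roi, question)            # hypothetical call
```

The key design choice this sketch captures is that the grounding step is explicit and language-guided, rather than an implicit attention map, which is what the paper's analysis credits for the gains.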

Yunze Man, De-An Huang, Guilin Liu, Shiwei Sheng, Shilong Liu, Liang-Yan Gui, Jan Kautz, Yu-Xiong Wang, Zhiding Yu • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Referring Expression Grounding | RefCOCO (testA) | Accuracy | 92.9 | 41
Referring Expression Grounding | RefCOCO (testB) | Accuracy | 85.4 | 41
Referring Expression Grounding | RefCOCOg (test) | Accuracy | 85.2 | 37
Referring Expression Grounding | RefCOCO+ (testA) | -- | -- | 23
Referring Expression Grounding | RefCOCO+ (testB) | -- | -- | 23
Referring Expression Grounding | RefCOCO+ (val) | Acc@0.5 | 84.7 | 14
Referring Expression Grounding | RefCOCOg (val) | Accuracy (IoU=0.5) | 86.7 | 14
Referring Expression Grounding | RefCOCO UMD (val) | Acc@0.5 | 89.8 | 14
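Several rows above report Acc@0.5, the standard referring-grounding metric: a prediction counts as correct when its intersection-over-union (IoU) with the ground-truth box is at least 0.5. A self-contained sketch of that metric (boxes as `(x1, y1, x2, y2)` tuples; not taken from the paper's evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda t: max(0.0, t[2] - t[0]) * max(0.0, t[3] - t[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0


def acc_at_05(preds, gts):
    """Fraction of predicted boxes with IoU >= 0.5 against their ground truth."""
    hits = sum(iou(p, g) >= 0.5 for p, g in zip(preds, gts))
    return hits / len(preds)
```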
