
TransVG: End-to-End Visual Grounding with Transformers

About

In this paper, we present a neat yet effective transformer-based framework for visual grounding, namely TransVG, to address the task of grounding a language query to the corresponding region in an image. The state-of-the-art methods, both two-stage and one-stage, rely on complex modules with manually designed mechanisms to perform query reasoning and multi-modal fusion. However, the involvement of certain mechanisms in fusion module design, such as query decomposition and image scene graphs, makes the models easily overfit to datasets with specific scenarios and limits the full interaction between the visual and linguistic contexts. To avoid these issues, we propose to establish the multi-modal correspondence by leveraging transformers, and empirically show that the complex fusion modules (e.g., modular attention network, dynamic graph, and multi-modal tree) can be replaced by a simple stack of transformer encoder layers with higher performance. Moreover, we re-formulate visual grounding as a direct coordinate regression problem and avoid making predictions out of a set of candidates (i.e., region proposals or anchor boxes). Extensive experiments are conducted on five widely used datasets, and a series of state-of-the-art records are set by our TransVG. We build the benchmark of transformer-based visual grounding frameworks and make the code available at \url{https://github.com/djiajunustc/TransVG}.
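To make the two key ideas concrete, here is a minimal NumPy sketch of the overall scheme the abstract describes: visual and linguistic tokens are fused by plain self-attention over the joint sequence, and the box is regressed directly from a learnable [REG] token rather than selected from proposals or anchors. All shapes, weights, and the single-head/single-layer setup are illustrative assumptions, not the paper's actual architecture (TransVG uses multi-layer, multi-head encoders with trained weights).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # shared embedding dimension (illustrative)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    # Single-head self-attention over the joint visual + text + [REG] sequence.
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))
    return attn @ v

# Hypothetical inputs: 49 visual tokens (e.g., a 7x7 feature map) and
# 8 text tokens, both already projected into the shared d-dim space.
visual = rng.normal(size=(49, d))
text = rng.normal(size=(8, d))
reg_token = rng.normal(size=(1, d))  # learnable [REG] token

# Random stand-ins for learned projection weights.
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
W_head = rng.normal(size=(d, 4)) * 0.1  # regression head

# One fusion pass over the concatenated sequence; in TransVG this would be
# a stack of such encoder layers.
tokens = np.concatenate([reg_token, visual, text], axis=0)
fused = self_attention(tokens, Wq, Wk, Wv)

# Direct coordinate regression: sigmoid maps the [REG] output to
# normalized (cx, cy, w, h) in [0, 1] -- no proposals, no anchors.
box = 1.0 / (1.0 + np.exp(-(fused[0] @ W_head)))
print(box.shape)
```

The point of the sketch is the candidate-free formulation: the model emits four normalized coordinates in one shot, so no ranking or selection over a candidate set is needed.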

Jiajun Deng, Zhengyuan Yang, Tianlang Chen, Wengang Zhou, Houqiang Li · 2021

Related benchmarks

| Task                               | Dataset          | Metric   | Result | Rank |
|------------------------------------|------------------|----------|--------|------|
| Referring Expression Comprehension | RefCOCO+ (val)   | Accuracy | 68     | 345  |
| Referring Expression Comprehension | RefCOCO (val)    | Accuracy | 81.02  | 335  |
| Referring Expression Comprehension | RefCOCO (testA)  | Accuracy | 83.38  | 333  |
| Referring Expression Comprehension | RefCOCOg (test)  | Accuracy | 68.71  | 291  |
| Referring Expression Comprehension | RefCOCOg (val)   | Accuracy | 68.67  | 291  |
| Referring Expression Comprehension | RefCOCO+ (testB) | Accuracy | 59.24  | 235  |
| Referring Expression Comprehension | RefCOCO+ (testA) | Accuracy | 72.46  | 207  |
| Referring Expression Comprehension | RefCOCO (testB)  | Accuracy | 78.4   | 196  |
| Visual Grounding                   | RefCOCO+ (testB) | --       | --     | 169  |
| Visual Grounding                   | RefCOCO+ (testA) | --       | --     | 168  |

Showing 10 of 68 rows.
