
SeqTR: A Simple yet Universal Network for Visual Grounding

About

In this paper, we propose a simple yet universal network termed SeqTR for visual grounding tasks, e.g., phrase localization, referring expression comprehension (REC) and segmentation (RES). The canonical paradigms for visual grounding often require substantial expertise in designing network architectures and loss functions, making them hard to generalize across tasks. To simplify and unify the modeling, we cast visual grounding as a point prediction problem conditioned on image and text inputs, where either the bounding box or the binary mask is represented as a sequence of discrete coordinate tokens. Under this paradigm, visual grounding tasks are unified in our SeqTR network without task-specific branches or heads, e.g., the convolutional mask decoder for RES, which greatly reduces the complexity of multi-task modeling. In addition, SeqTR shares the same optimization objective for all tasks with a simple cross-entropy loss, further reducing the complexity of deploying hand-crafted loss functions. Experiments on five benchmark datasets demonstrate that the proposed SeqTR outperforms (or is on par with) the existing state of the art, proving that a simple yet universal approach to visual grounding is indeed feasible. Source code is available at https://github.com/sean-zhuh/SeqTR.
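The abstract's core idea is that a bounding box (or mask contour) can be serialized as discrete coordinate tokens, so that localization becomes token classification trainable with plain cross-entropy. A minimal sketch of such a quantization scheme is below; the bin count and function names are illustrative assumptions, not taken from the SeqTR codebase.

```python
# Hypothetical sketch of the coordinate-token idea: normalize box corners
# to [0, 1], then quantize each coordinate into one of `num_bins` discrete
# tokens. The 1000-bin vocabulary size is an assumption for illustration.

def box_to_tokens(box, img_w, img_h, num_bins=1000):
    """Quantize an (x1, y1, x2, y2) box into discrete coordinate tokens."""
    x1, y1, x2, y2 = box
    norm = [x1 / img_w, y1 / img_h, x2 / img_w, y2 / img_h]
    return [min(int(v * num_bins), num_bins - 1) for v in norm]

def tokens_to_box(tokens, img_w, img_h, num_bins=1000):
    """Invert the quantization, recovering the box up to binning error."""
    centers = [(t + 0.5) / num_bins for t in tokens]
    return (centers[0] * img_w, centers[1] * img_h,
            centers[2] * img_w, centers[3] * img_h)

tokens = box_to_tokens((48.0, 120.0, 320.0, 400.0), img_w=640, img_h=480)
# Each token is now an integer class index, so a sequence decoder can be
# supervised with ordinary cross-entropy over the token vocabulary.
```

Because a mask can likewise be represented by a sampled sequence of contour points quantized the same way, detection and segmentation share one output space and one loss, which is what removes the need for task-specific heads.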

Chaoyang Zhu, Yiyi Zhou, Yunhang Shen, Gen Luo, Xingjia Pan, Mingbao Lin, Chao Chen, Liujuan Cao, Xiaoshuai Sun, Rongrong Ji • 2022

Related benchmarks

Task                                 Dataset           Metric    Result  Rank
Referring Expression Comprehension   RefCOCO+ (val)    Accuracy  78.69   354
Referring Expression Comprehension   RefCOCO (val)     Accuracy  87.00   344
Referring Expression Comprehension   RefCOCO (testA)   Accuracy  90.15   342
Referring Expression Comprehension   RefCOCOg (test)   Accuracy  83.37   300
Referring Expression Comprehension   RefCOCOg (val)    Accuracy  82.69   300
Referring Expression Segmentation    RefCOCO (val)     mIoU      71.70   259
Referring Expression Segmentation    RefCOCO (testA)   --        --      257
Referring Expression Segmentation    RefCOCO+ (testB)  mIoU      58.97   252
Referring Expression Comprehension   RefCOCO+ (testB)  Accuracy  71.87   244
Referring Expression Segmentation    RefCOCO (testA)   mIoU      73.31   230

Showing 10 of 81 rows.
