
Parallel Vertex Diffusion for Unified Visual Grounding

About

Unified visual grounding pursues a simple and generic technical route that leverages multi-task data with minimal task-specific design. The most advanced methods typically represent boxes and masks as vertex sequences, modeling referring detection and segmentation as autoregressive sequential vertex generation. However, generating high-dimensional vertex sequences sequentially is error-prone: earlier parts of the sequence remain static and cannot be refined based on information from later vertices, even when there is a significant location gap. Moreover, with a limited number of vertices, the poor fitting of objects with complex contours restricts the upper bound on performance. To resolve this dilemma, we propose a parallel vertex generation paradigm that achieves superior high-dimensional scalability with a diffusion model by simply modifying the noise dimension. An intuitive instantiation of our paradigm is Parallel Vertex Diffusion (PVD), which directly sets vertex coordinates as the generation target and uses a diffusion model for training and inference. We identify two flaws in this naive instantiation: (1) unnormalized coordinates cause high variance in the loss value; (2) the original training objective considers only point-level consistency and ignores geometry-level consistency. To solve the first flaw, a Center Anchor Mechanism (CAM) is designed to convert coordinates into normalized offset values, stabilizing the training loss. For the second flaw, an Angle Summation Loss (ASL) is designed to constrain the geometric difference between predicted and ground-truth vertices, enforcing geometry-level consistency. Empirical results show that PVD achieves state-of-the-art performance in both referring detection and segmentation, and that our paradigm is more scalable and efficient than sequential vertex generation when handling high-dimensional data.
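The mechanisms named in the abstract are concrete enough to sketch. Below is a minimal PyTorch sketch, not the authors' code: `cam_normalize` illustrates the offset normalization attributed to CAM, `q_sample` shows why parallel generation scales by simply widening the noise dimension, and `angle_summation_loss` gives one plausible reading of ASL via the angle-summation (winding-number) rule. Every function name, tensor shape, and the exact form of the penalty is an assumption made for illustration.

```python
# Minimal sketch of the ideas in the abstract. NOT the authors'
# implementation: all names, shapes, and the exact penalty form are
# assumptions for illustration.
import math
import torch

def cam_normalize(vertices, center, scale):
    """Center Anchor Mechanism (sketch): re-express absolute vertex
    coordinates as offsets from a center anchor, divided by a scale
    (e.g. image size) so the regression target lies in a small range
    and the loss variance stays low."""
    return (vertices - center) / scale

def cam_denormalize(offsets, center, scale):
    """Inverse of cam_normalize: recover absolute coordinates."""
    return offsets * scale + center

def q_sample(x0, t, alphas_cumprod):
    """Standard DDPM forward process applied to ALL vertex offsets at
    once: the noise tensor simply has shape (N, 2), so scaling to more
    vertices only changes the noise dimension, not the model."""
    noise = torch.randn_like(x0)
    a = alphas_cumprod[t].sqrt()
    s = (1.0 - alphas_cumprod[t]).sqrt()
    return a * x0 + s * noise, noise

def angle_sum(point, polygon):
    """Signed angle subtended at `point` by each polygon edge, summed.
    By the angle-summation (winding-number) rule this is ~2*pi when
    the point is inside the polygon and ~0 when it is outside."""
    rel = polygon - point                    # (M, 2) vectors to vertices
    nxt = torch.roll(rel, -1, dims=0)        # vector to the next vertex
    cross = rel[:, 0] * nxt[:, 1] - rel[:, 1] * nxt[:, 0]
    dot = (rel * nxt).sum(dim=1)
    return torch.atan2(cross, dot).sum()     # signed angle per edge, summed

def angle_summation_loss(pred_vertices, gt_polygon):
    """ASL (one plausible reading): penalize each predicted vertex by
    how far its angle sum w.r.t. the ground-truth polygon deviates
    from 2*pi, rewarding geometry-level agreement with the contour
    rather than only pointwise coordinate matches."""
    losses = [(angle_sum(p, gt_polygon) - 2 * math.pi).abs()
              for p in pred_vertices]
    return torch.stack(losses).mean()
```

Under this reading, CAM keeps regression targets bounded regardless of where the object sits in the image, and ASL supplies a contour-level training signal that a pure pointwise coordinate loss lacks.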

Zesen Cheng, Kehan Li, Peng Jin, Xiangyang Ji, Li Yuan, Chang Liu, Jie Chen · 2023

Related benchmarks

Task                               | Dataset          | Metric   | Result | Rank
Referring Expression Comprehension | RefCOCO+ (val)   | Accuracy | 73.89  | 345
Referring Expression Comprehension | RefCOCO (val)    | Accuracy | 84.52  | 335
Referring Expression Comprehension | RefCOCO (testA)  | Accuracy | 87.64  | 333
Referring Expression Comprehension | RefCOCO+ (testA) | Accuracy | 78.41  | 207
Referring Image Segmentation       | RefCOCO+ (testB) | mIoU     | 56.92  | 200
Referring Image Segmentation       | RefCOCO (val)    | mIoU     | 74.82  | 197
Referring Expression Comprehension | RefCOCO (testB)  | Accuracy | 79.63  | 196
Referring Image Segmentation       | RefCOCO (testA)  | mIoU     | 77.11  | 178
Referring Expression Comprehension | RefCOCO+ (testB) | Accuracy | 64.25  | 167
Referring Image Segmentation       | RefCOCO (testB)  | mIoU     | 69.52  | 119

Showing 10 of 25 rows.
