
VPNeXt -- Rethinking Dense Decoding for Plain Vision Transformer

About

We present VPNeXt, a new and simple model for the Plain Vision Transformer (ViT). Unlike the many related studies that share the same homogeneous paradigms, VPNeXt offers a fresh perspective on dense representation based on ViT. In more detail, the proposed VPNeXt addresses two concerns about the existing paradigm: (1) Is a complex Transformer Mask Decoder architecture necessary to obtain good representations? (2) Does the Plain ViT really need to depend on a mock pyramid feature for upsampling? For (1), we investigated the potential underlying reasons that contribute to the effectiveness of the Transformer Decoder and introduced Visual Context Replay (VCR) to achieve similar effects efficiently. For (2), we introduced the ViTUp module, which fully utilizes the previously overlooked real pyramid feature of ViT to achieve better upsampling results than the earlier mock pyramid feature. This represents the first instance of such functionality in the field of semantic segmentation for Plain ViT. We performed ablation studies on the related modules to progressively verify their effectiveness, and conducted comparative experiments and visualizations showing that VPNeXt achieves state-of-the-art performance with a simple and effective design. Moreover, VPNeXt significantly exceeds the long-standing mIoU barrier on the VOC2012 dataset, setting a new state of the art by a large margin, the largest improvement since 2015.
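The abstract contrasts a "mock" pyramid (upsampling a single-scale ViT output) with fusing real multi-scale features. The paper's actual ViTUp design is not detailed here, so the following is only a minimal NumPy sketch of the general idea: take feature maps at several resolutions (as if tapped from different ViT stages), progressively upsample the coarsest one, and fuse it with the finer maps. The function names, nearest-neighbor upsampling, and additive fusion are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling of a (C, H, W) feature map.
    # (A real model would typically use bilinear interpolation or
    # a learned transposed convolution instead.)
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse_pyramid(features):
    """Fuse a list of (C, H, W) maps ordered from finest (high-res)
    to coarsest (low-res), where each level is half the resolution
    of the previous one. Returns a map at the finest resolution."""
    out = features[-1]                     # start from the coarsest level
    for finer in reversed(features[:-1]):  # walk back toward full resolution
        out = upsample2x(out) + finer      # upsample, then additively fuse
    return out

# Toy pyramid: three levels of a 4-channel feature map.
rng = np.random.default_rng(0)
pyramid = [rng.standard_normal((4, s, s)) for s in (32, 16, 8)]
fused = fuse_pyramid(pyramid)
print(fused.shape)  # (4, 32, 32)
```

The point of the sketch is only that fusing genuinely multi-resolution features gives the decoder real spatial detail at each scale, rather than detail synthesized from one resolution.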

Xikai Tang, Ye Huang, Guangqiang Yin, Lixin Duan • 2025

Related benchmarks

Task                  | Dataset                | Result | Rank
----------------------|------------------------|--------|-----
Semantic segmentation | PASCAL VOC 2012 (test) | --     | 1342
Semantic segmentation | Cityscapes (val)       | --     | 572
Semantic segmentation | Pascal Context (test)  | --     | 176
Semantic segmentation | COCO-Stuff 164k (val)  | --     | 41
