
Long-Range Grouping Transformer for Multi-View 3D Reconstruction

About

Transformer networks have recently demonstrated superior performance in many computer vision tasks. In multi-view 3D reconstruction following this paradigm, self-attention must process a large number of image tokens carrying massive amounts of information when many views are given as input, and this glut of information makes the model extremely difficult to learn. To alleviate this problem, recent methods either compress the number of tokens representing each view or discard the attention operations between tokens from different views; both choices clearly hurt performance. We therefore propose long-range grouping attention (LGA), based on the divide-and-conquer principle. Tokens from all views are divided into groups, and attention is computed separately within each group. Because the tokens in each group are sampled from all views, each group provides a macro representation of the views it draws from, while the diversity among groups guarantees rich feature learning. Using LGA to connect inter-view features and standard self-attention layers to extract intra-view features, we build an effective and efficient encoder. In addition, we design a novel progressive upsampling decoder that generates voxels at relatively high resolution. Building on these components, we construct a powerful transformer-based network called LRGT. Experimental results on ShapeNet verify that our method achieves state-of-the-art accuracy in multi-view reconstruction. Code will be available at https://github.com/LiyingCV/Long-Range-Grouping-Transformer.
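The grouping idea in the abstract can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the paper's implementation: it assumes tokens of shape (views, tokens-per-view, channels), splits spatial positions into strided groups so that each group contains tokens from every view, and runs plain scaled dot-product self-attention inside each group. The function name, the strided sampling scheme, and the single-head attention are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def long_range_grouping_attention(tokens, num_groups):
    """Hypothetical sketch of long-range grouping attention (LGA).

    tokens: array of shape (V, N, C) -- V views, N tokens per view, C channels.
    Spatial positions are split into `num_groups` strided groups, so each
    group mixes tokens drawn from every view (inter-view attention) while
    keeping each attention matrix small.
    """
    V, N, C = tokens.shape
    assert N % num_groups == 0, "token count must divide evenly into groups"
    out = np.empty_like(tokens)
    for g in range(num_groups):
        idx = np.arange(g, N, num_groups)                    # strided spatial positions
        group = tokens[:, idx, :].reshape(V * len(idx), C)   # tokens from all V views
        # plain scaled dot-product self-attention within the group
        attn = softmax(group @ group.T / np.sqrt(C))
        out[:, idx, :] = (attn @ group).reshape(V, len(idx), C)
    return out
```

Each attention matrix is (V·N/G)² instead of (V·N)², so the cost of inter-view mixing drops by roughly a factor of G while every group still sees all views.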

Liying Yang, Zhenwei Zhu, Xuxin Lin, Jian Nong, Yanyan Liang • 2023

Related benchmarks

Task                           Dataset          Result      Rank
Multi-view 3D Reconstruction   ShapeNet (test)  IoU 0.7922  209
Single-view 3D Reconstruction  Pix3D (test)     IoU 0.304   16

Other info

Code: https://github.com/LiyingCV/Long-Range-Grouping-Transformer
