
Long-LRM: Long-sequence Large Reconstruction Model for Wide-coverage Gaussian Splats

About

We propose Long-LRM, a feed-forward 3D Gaussian reconstruction model for instant, high-resolution, 360° wide-coverage, scene-level reconstruction. Specifically, it takes in 32 input images at a resolution of 960×540 and produces the Gaussian reconstruction in just 1 second on a single A100 GPU. To handle the long sequence of 250K tokens brought by the large input size, Long-LRM combines recent Mamba2 blocks with classical transformer blocks, enhanced by a lightweight token merging module and Gaussian pruning steps that balance quality and efficiency. We evaluate Long-LRM on the large-scale DL3DV benchmark and Tanks&Temples, demonstrating reconstruction quality comparable to optimization-based methods while achieving an 800x speedup over them and handling an input size at least 60x larger than that of previous feed-forward approaches. We conduct extensive ablation studies on our model design choices for both rendering quality and computational efficiency. We also explore Long-LRM's compatibility with other Gaussian variants such as 2D GS, which enhances its geometry reconstruction ability. Project page: https://arthurhero.github.io/projects/llrm
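The abstract mentions two efficiency mechanisms: a token merging module that shrinks the ~250K-token sequence before the heavier blocks, and Gaussian pruning that discards low-contribution Gaussians. The sketch below is an illustrative stand-in only (the function names, merge-by-averaging strategy, and opacity-threshold criterion are assumptions, not the paper's exact design):

```python
import numpy as np

def merge_tokens(tokens, merge_factor=2):
    """Average-merge groups of consecutive tokens along the sequence axis.

    Hypothetical stand-in for a token merging module: reduces sequence
    length by `merge_factor` before expensive attention layers.
    """
    n, d = tokens.shape
    n_trim = (n // merge_factor) * merge_factor  # drop any ragged tail
    return tokens[:n_trim].reshape(-1, merge_factor, d).mean(axis=1)

def prune_gaussians(opacities, threshold=0.01):
    """Keep only Gaussians whose opacity exceeds a threshold (a common
    pruning heuristic; the paper's actual criterion may differ)."""
    return opacities > threshold

# A sequence on the order of the paper's 250K tokens, 64-dim features.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((250_000, 64))
merged = merge_tokens(tokens, merge_factor=4)
print(merged.shape[0])  # 250000 tokens -> 62500 after 4x merging

mask = prune_gaussians(np.array([0.5, 0.001, 0.2]))
print(mask.tolist())  # [True, False, True]
```

Merging trades a small amount of per-token detail for a quadratic reduction in attention cost, which is the usual motivation for such modules in long-sequence models.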

Chen Ziwen, Hao Tan, Kai Zhang, Sai Bi, Fujun Luan, Yicong Hong, Li Fuxin, Zexiang Xu • 2024

Related benchmarks

Task                  Dataset                  Result        Rank
Novel View Synthesis  Tanks&Temples (test)     PSNR 19.44    257
Novel View Synthesis  RealEstate10K            PSNR 28.54    173
Novel View Synthesis  Mip-NeRF 360             PSNR 21.3     143
Novel View Synthesis  Tanks&Temples            PSNR 19.44    95
Novel View Synthesis  Mip-NeRF360 (test)       PSNR 21.3     62
Novel View Synthesis  DL3DV (test)             PSNR 23.97    61
3D Reconstruction     Tanks&Temples            PSNR 19.11    42
Novel View Synthesis  RealEstate-10K 2-view    PSNR 28.54    32
Novel View Synthesis  DL3DV (evaluation)       PSNR 23.54    22
3D Reconstruction     DL3DV-140                PSNR 24.99    18

(Showing 10 of 19 rows.)
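All rows above report PSNR (peak signal-to-noise ratio), the standard novel-view-synthesis metric: 10·log10(MAX² / MSE) between the rendered and ground-truth images, in dB. A minimal reference computation (the function name and [0, 1] pixel range are my own conventions, not tied to any benchmark's evaluation script):

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """PSNR in dB between two images with pixel values in [0, max_val]."""
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Uniform 0.1 error -> MSE = 0.01 -> PSNR = 20 dB
pred = np.full((4, 4), 0.5)
target = np.full((4, 4), 0.6)
print(round(psnr(pred, target), 2))  # 20.0
```

Higher is better; a ~5 dB spread between datasets (e.g. 19.44 on Tanks&Temples vs. 24.99 on DL3DV-140) reflects scene difficulty as much as model quality.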
