
VGGT: Visual Geometry Grounded Transformer

About

We present VGGT, a feed-forward neural network that directly infers all key 3D attributes of a scene, including camera parameters, point maps, depth maps, and 3D point tracks, from one, a few, or hundreds of its views. This approach is a step forward in 3D computer vision, where models have typically been constrained to and specialized for single tasks. It is also simple and efficient, reconstructing images in under one second, and still outperforming alternatives that require post-processing with visual geometry optimization techniques. The network achieves state-of-the-art results in multiple 3D tasks, including camera parameter estimation, multi-view depth estimation, dense point cloud reconstruction, and 3D point tracking. We also show that using pretrained VGGT as a feature backbone significantly enhances downstream tasks, such as non-rigid point tracking and feed-forward novel view synthesis. Code and models are publicly available at https://github.com/facebookresearch/vggt.

Jianyuan Wang, Minghao Chen, Nikita Karaev, Andrea Vedaldi, Christian Rupprecht, David Novotny • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Depth Estimation | NYU v2 (test) | Threshold Accuracy (δ < 1.25) | 98.9 | 432 |
| Monocular Depth Estimation | KITTI | Abs Rel | 0.082 | 203 |
| Video Depth Estimation | Sintel | Delta Threshold Accuracy (1.25) | 72.7 | 193 |
| Camera Pose Estimation | Sintel | ATE | 0.167 | 192 |
| Camera Pose Estimation | TUM-dynamic | ATE | 0.0109 | 163 |
| Relative Pose Estimation | MegaDepth 1500 | AUC @ 20° | 80.71 | 151 |
| Novel View Synthesis | Mip-NeRF360 | PSNR | 26.4 | 138 |
| Monocular Depth Estimation | ETH3D | Abs Rel | 3.64 | 132 |
| Monocular Depth Estimation | NYU V2 | Delta 1 Acc | 98.3 | 131 |
| Video Depth Estimation | KITTI | Abs Rel | 0.052 | 126 |
Showing 10 of 498 rows.
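The depth results above use two standard evaluation metrics: absolute relative error (Abs Rel, lower is better) and threshold accuracy (the fraction of pixels whose predicted-to-ground-truth depth ratio falls within δ < 1.25, higher is better). A minimal sketch of both, assuming NumPy arrays of predicted and ground-truth depths (the function names here are illustrative, not from the VGGT codebase):

```python
import numpy as np

def abs_rel(pred, gt):
    # Mean absolute relative error: mean(|pred - gt| / gt)
    return float(np.mean(np.abs(pred - gt) / gt))

def threshold_accuracy(pred, gt, delta=1.25):
    # Fraction of pixels where max(pred/gt, gt/pred) < delta
    ratio = np.maximum(pred / gt, gt / pred)
    return float(np.mean(ratio < delta))

# Toy example: four depth values in meters
gt = np.array([1.0, 2.0, 4.0, 8.0])
pred = np.array([1.1, 1.9, 4.4, 7.2])
```

Leaderboards typically report threshold accuracy as a percentage; a value such as 98.9 on NYU v2 means 98.9% of pixels satisfy the δ < 1.25 ratio test.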
