VGGT: Visual Geometry Grounded Transformer

About

We present VGGT, a feed-forward neural network that directly infers all key 3D attributes of a scene, including camera parameters, point maps, depth maps, and 3D point tracks, from one, a few, or hundreds of its views. This approach is a step forward in 3D computer vision, where models have typically been constrained to and specialized for single tasks. It is also simple and efficient, reconstructing images in under one second, and still outperforming alternatives that require post-processing with visual geometry optimization techniques. The network achieves state-of-the-art results in multiple 3D tasks, including camera parameter estimation, multi-view depth estimation, dense point cloud reconstruction, and 3D point tracking. We also show that using pretrained VGGT as a feature backbone significantly enhances downstream tasks, such as non-rigid point tracking and feed-forward novel view synthesis. Code and models are publicly available at https://github.com/facebookresearch/vggt.

Jianyuan Wang, Minghao Chen, Nikita Karaev, Andrea Vedaldi, Christian Rupprecht, David Novotny • 2025
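
To make the feed-forward usage concrete, here is a minimal inference sketch in PyTorch. The module paths, the load_and_preprocess_images helper, and the facebook/VGGT-1B checkpoint name follow the repository's README; treat them as assumptions to verify against the current code.

```python
# Minimal VGGT inference sketch (assumptions: module paths, the helper, and
# the checkpoint name follow the facebookresearch/vggt README; verify there).
import torch
from vggt.models.vggt import VGGT
from vggt.utils.load_fn import load_and_preprocess_images

device = "cuda" if torch.cuda.is_available() else "cpu"
model = VGGT.from_pretrained("facebook/VGGT-1B").to(device)

# One, a few, or hundreds of views of the same scene.
image_names = ["scene/frame_000.png", "scene/frame_001.png"]
images = load_and_preprocess_images(image_names).to(device)

with torch.no_grad():
    # A single forward pass returns all key 3D attributes at once:
    # camera parameters, depth maps, point maps, and tracking features.
    predictions = model(images)
```

The repository's examples also run inference under mixed-precision autocast (bfloat16 on Ampere or newer GPUs) for speed; the sketch omits that for brevity.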

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Monocular Depth Estimation | KITTI | AbsRel | 0.082 | 161 |
| Monocular Depth Estimation | ETH3D | AbsRel | 3.64 | 117 |
| Monocular Depth Estimation | NYU V2 | Delta 1 Acc | 98.3 | 113 |
| Video Depth Estimation | Sintel | Relative Error (Rel) | 0.202 | 109 |
| Video Depth Estimation | BONN | Relative Error (Rel) | 0.049 | 103 |
| Depth Estimation | ScanNet | AbsRel | 1.9 | 94 |
| Monocular Depth Estimation | DIODE | AbsRel | 5.24 | 93 |
| Camera Pose Estimation | Sintel | ATE | 0.167 | 92 |
| Camera Pose Estimation | ScanNet | ATE RMSE (Avg.) | 0.023 | 61 |
| Camera Pose Estimation | TUM dynamics | RRE | 0.31 | 57 |

Showing 10 of 221 rows.
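
For reference, the depth rows above report standard dense-depth metrics. The sketch below gives the usual definitions of AbsRel and Delta 1 accuracy in NumPy; validity masks, scale alignment, and percentage-versus-ratio reporting vary by benchmark, so treat it as illustrative rather than any leaderboard's exact evaluation script.

```python
# Common dense-depth metric definitions (illustrative; benchmark-specific
# conventions such as masking and scale alignment are not reproduced here).
import numpy as np

def abs_rel(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute relative error: mean(|pred - gt| / gt) over valid pixels."""
    valid = gt > 0
    return float(np.mean(np.abs(pred[valid] - gt[valid]) / gt[valid]))

def delta1_acc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of valid pixels with max(pred/gt, gt/pred) < 1.25."""
    valid = (gt > 0) & (pred > 0)
    ratio = np.maximum(pred[valid] / gt[valid], gt[valid] / pred[valid])
    return float(np.mean(ratio < 1.25))
```

The camera-pose rows likewise use standard metrics: ATE is the absolute trajectory error (translation RMSE after trajectory alignment), and RRE is the relative rotation error between predicted and ground-truth camera pairs.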

Other info

Code: https://github.com/facebookresearch/vggt
