
MonoViT: Self-Supervised Monocular Depth Estimation with a Vision Transformer

About

Self-supervised monocular depth estimation is an attractive solution because it does not require hard-to-source depth labels for training. Convolutional neural networks (CNNs) have recently achieved great success in this task. However, their limited receptive field constrains existing architectures to reason only locally, dampening the effectiveness of the self-supervised paradigm. In light of the recent successes achieved by Vision Transformers (ViTs), we propose MonoViT, a brand-new framework combining the global reasoning enabled by ViT models with the flexibility of self-supervised monocular depth estimation. By combining plain convolutions with Transformer blocks, our model reasons both locally and globally, yielding depth predictions with a higher level of detail and accuracy. This allows MonoViT to achieve state-of-the-art performance on the established KITTI dataset. Moreover, MonoViT demonstrates superior generalization on other datasets such as Make3D and DrivingStereo.
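The core idea of the abstract — pairing a convolution's local receptive field with a Transformer's global attention — can be illustrated with a minimal NumPy sketch. This is not MonoViT's actual encoder (which is a full learned network); it is a toy hybrid block with hypothetical function names, showing how a local convolution path and a global self-attention path over the same feature map can be fused by addition.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv3x3(feat, kernel):
    # local path: 3x3 zero-padded convolution (one scalar weight
    # per spatial offset, shared across channels, for brevity)
    H, W, C = feat.shape
    padded = np.pad(feat, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(feat)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + H, dx:dx + W]
    return out

def self_attention(feat):
    # global path: single-head self-attention in which every pixel
    # attends to every other pixel, regardless of distance
    H, W, C = feat.shape
    x = feat.reshape(H * W, C)
    attn = softmax(x @ x.T / np.sqrt(C))
    return (attn @ x).reshape(H, W, C)

def hybrid_block(feat, kernel):
    # fuse local and global reasoning with a residual connection
    return feat + conv3x3(feat, kernel) + self_attention(feat)
```

The convolution sees only a 3x3 neighborhood, while the attention term couples all H*W positions; stacking such blocks is the general recipe hybrid CNN/ViT encoders use to capture both fine detail and scene-level context.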

Chaoqiang Zhao, Youmin Zhang, Matteo Poggi, Fabio Tosi, Xianda Guo, Zheng Zhu, Guan Huang, Yang Tang, Stefano Mattoccia · 2022

Related benchmarks

Task                         Dataset                                      Result                       Rank
Monocular Depth Estimation   KITTI (Eigen)                                Abs Rel: 0.102               502
Depth Estimation             KITTI (Eigen split)                          RMSE: 4.372                  276
Monocular Depth Estimation   KITTI (Eigen split)                          Abs Rel: 0.093               193
Monocular Depth Estimation   KITTI                                        Abs Rel: 0.099               161
Monocular Depth Estimation   Make3D (test)                                Abs Rel: 0.286               132
Monocular Depth Estimation   DDAD (test)                                  RMSE: 11.777                 122
Monocular Depth Estimation   KITTI Improved GT (Eigen)                    Abs Rel: 0.068               92
Monocular Depth Estimation   KITTI improved ground truth (Eigen split)    Abs Rel: 0.067               65
Monocular Depth Estimation   Cityscapes                                   Accuracy (δ < 1.25): 88.1    62
Monocular Depth Estimation   KITTI Eigen (test)                           Abs Rel: 0.099               46
(Showing 10 of 24 benchmark entries.)

Other info

Code
