
DUNE: Distilling a Universal Encoder from Heterogeneous 2D and 3D Teachers

About

Recent multi-teacher distillation methods have unified the encoders of multiple foundation models into a single encoder, achieving competitive performance on core vision tasks like classification, segmentation, and depth estimation. This led us to ask: Could similar success be achieved when the pool of teachers also includes vision models specialized in diverse tasks across both 2D and 3D perception? In this paper, we define and investigate the problem of heterogeneous teacher distillation, or co-distillation, a challenging multi-teacher distillation scenario where teacher models vary significantly in both (a) their design objectives and (b) the data they were trained on. We explore data-sharing strategies and teacher-specific encoding, and introduce DUNE, a single encoder excelling in 2D vision, 3D understanding, and 3D human perception. Our model achieves performance comparable to that of its larger teachers, sometimes even outperforming them, on their respective tasks. Notably, DUNE surpasses MASt3R in Map-free Visual Relocalization with a much smaller encoder.
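To make the co-distillation setup concrete, here is a minimal sketch of a multi-teacher distillation objective: a shared student embedding is mapped through one teacher-specific projector per teacher, and the projected features are regressed against each teacher's features. All names, dimensions, and the plain MSE loss are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a shared student embedding and three teachers
# with different feature sizes (e.g. a 2D vision teacher, a 3D geometry
# teacher, and a human-perception teacher).
D_STUDENT = 64
TEACHER_DIMS = {"teacher_2d": 96, "teacher_3d": 128, "teacher_human": 80}

# One teacher-specific linear projector maps the shared student features
# into each teacher's feature space (teacher-specific encoding).
projectors = {
    name: rng.standard_normal((D_STUDENT, dim)) * 0.01
    for name, dim in TEACHER_DIMS.items()
}

def codistillation_loss(student_feats, teacher_feats):
    """Sum over teachers of the mean-squared error between the projected
    student features and that teacher's target features."""
    total = 0.0
    for name, target in teacher_feats.items():
        pred = student_feats @ projectors[name]  # (batch, teacher_dim)
        total += np.mean((pred - target) ** 2)
    return total

# Toy batch: student features and matching per-teacher targets.
batch = 4
student = rng.standard_normal((batch, D_STUDENT))
targets = {
    name: rng.standard_normal((batch, dim))
    for name, dim in TEACHER_DIMS.items()
}

loss = codistillation_loss(student, targets)
print(loss)
```

In practice the student and projectors would be trained jointly by gradient descent, with the data-sharing strategies explored in the paper deciding which teachers supervise which images.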

Mert Bulent Sariyildiz, Philippe Weinzaepfel, Thomas Lucas, Pau de Jorge, Diane Larlus, Yannis Kalantidis · 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Semantic segmentation | Cityscapes | mIoU | 70.6 | 578 |
| Monocular Depth Estimation | NYU V2 | -- | -- | 113 |
| Semantic segmentation | ScanNet | mIoU | 65.2 | 59 |
| Multi-view pose regression | CO3D v2 | RRA@15 | 92.2 | 31 |
| Semantic segmentation | ADE20K | mIoU | 45.6 | 30 |
| Multi-view Depth Estimation | ScanNet (test) | Abs Rel | 4.24 | 23 |
| Multi-view pose regression | RealEstate10K | mAA(30) | 79.9 | 15 |
| Semantic segmentation | NYU V2 | mIoU | 68.2 | 14 |
| Multi-view Depth Estimation | ETH3D (test) | Relative Error (rel) | 2.48 | 9 |
| Multi-view Depth Estimation | Tanks and Temples (T&T) (test) | Relative Error | 2.6 | 9 |

Showing 10 of 15 rows.

Other info

Code
