
FF3R: Feedforward Feature 3D Reconstruction from Unconstrained views

About

Recent advances in vision foundation models have revolutionized geometry reconstruction and semantic understanding. Yet most existing approaches treat these capabilities in isolation, leading to redundant pipelines and compounded errors. This paper introduces FF3R, a fully annotation-free feedforward framework that unifies geometric and semantic reasoning from unconstrained multi-view image sequences. Unlike previous methods, FF3R requires no camera poses, depth maps, or semantic labels; it relies solely on rendering supervision for RGB and feature maps, establishing a scalable paradigm for unified 3D reasoning. In addition, we address two critical challenges in feedforward feature reconstruction pipelines, namely global semantic inconsistency and local structural inconsistency, through two key innovations: (i) a Token-wise Fusion Module that enriches geometry tokens with semantic context via cross-attention, and (ii) a Semantic-Geometry Mutual Boosting mechanism that combines geometry-guided feature warping for global consistency with semantic-aware voxelization for local coherence. Extensive experiments on ScanNet and DL3DV-10K demonstrate FF3R's superior performance in novel-view synthesis, open-vocabulary semantic segmentation, and depth estimation, with strong generalization to in-the-wild scenarios, paving the way for embodied intelligence systems that demand both spatial and semantic understanding.
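The abstract describes the Token-wise Fusion Module only at a high level: geometry tokens attend to semantic tokens via cross-attention. A minimal single-head sketch of that idea, with all names, shapes, and the residual connection being assumptions rather than details from the paper, might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def token_fusion(geo_tokens, sem_tokens, Wq, Wk, Wv):
    """Enrich geometry tokens with semantic context via cross-attention.

    geo_tokens: (N_g, d) geometry tokens, used as queries
    sem_tokens: (N_s, d) semantic tokens, used as keys and values
    Wq, Wk, Wv: (d, d) projection matrices (hypothetical parameters)
    """
    q = geo_tokens @ Wq                              # (N_g, d)
    k = sem_tokens @ Wk                              # (N_s, d)
    v = sem_tokens @ Wv                              # (N_s, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (N_g, N_s)
    return geo_tokens + attn @ v                     # residual fusion

# Toy usage with random tokens
rng = np.random.default_rng(0)
d = 8
geo = rng.normal(size=(5, d))
sem = rng.normal(size=(7, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
fused = token_fusion(geo, sem, Wq, Wk, Wv)           # (5, 8)
```

A real implementation would presumably use multi-head attention with layer normalization; this sketch only illustrates the information flow (geometry queries, semantic keys/values) that the module name suggests.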

Chaoyi Zhou, Run Wang, Feng Luo, Mert D. Pesé, Zhiwen Fan, Yiqi Zhong, Siyu Huang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Novel View Synthesis | ScanNet | PSNR | 22.7 | 130 |
| Novel View Synthesis | DL3DV-10K (140) | SSIM | 0.608 | 11 |
| Semantic Segmentation | DL3DV-10K | mIoU | 52.1 | 6 |
| Depth Consistency | ScanNet | Relative Error | 3.36 | 4 |
