
OpenRooms: An End-to-End Open Framework for Photorealistic Indoor Scene Datasets

About

We propose a novel framework for creating large-scale photorealistic datasets of indoor scenes, with ground truth geometry, material, lighting and semantics. Our goal is to make the dataset creation process widely accessible, transforming scans into photorealistic datasets with high-quality ground truth for appearance, layout, semantic labels, spatially-varying BRDFs and complex lighting, including direct, indirect and visibility components. This enables important applications in inverse rendering, scene understanding and robotics. We show that deep networks trained on the proposed dataset achieve competitive performance for shape, material and lighting estimation on real images, enabling photorealistic augmented reality applications such as object insertion and material editing. We also show that our semantic labels may be used for segmentation and multi-task learning. Finally, we demonstrate that our framework may be integrated with physics engines to create virtual robotics environments with unique ground truth, such as friction coefficients and correspondence to real scenes. The dataset and all the tools to create such datasets will be made publicly available.
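The inverse rendering task described above can be illustrated with a minimal Lambertian (diffuse-only) image-formation model. This is a deliberate simplification for intuition only: OpenRooms provides spatially-varying BRDFs and complex lighting with direct, indirect and visibility components, not just a single directional light. All array names and values here are illustrative assumptions, not part of the dataset.

```python
import numpy as np

# Toy scene: per-pixel albedo, per-pixel unit surface normals, one directional light.
albedo = np.array([[0.8, 0.2],
                   [0.5, 0.9]])                       # reflectance, single channel
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0                                 # all normals face the camera (+z)
light_dir = np.array([0.0, 0.0, 1.0])                 # head-on directional light

# Forward rendering: shading = max(n . l, 0), image = albedo * shading.
shading = np.clip(normals @ light_dir, 0.0, None)
image = albedo * shading

# Inverse rendering is the reverse problem: given only `image`, recover
# albedo, normals and lighting -- the quantities OpenRooms supplies as
# ground truth for supervised training.
```

With the light head-on, shading is 1 everywhere and the rendered image equals the albedo; ambiguities like this (brightness can be explained by either reflectance or lighting) are exactly why dense ground truth supervision helps.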

Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, Yuhan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Zexiang Xu, Hong-Xing Yu, Kalyan Sunkavalli, Miloš Hašan, Ravi Ramamoorthi, Manmohan Chandraker · 2020

Related benchmarks

Task                       Dataset                     Metric                    Result   Rank
Surface Normal Prediction  NYU V2                      Mean Error                25.3     100
Lighting Estimation        OpenRooms synthetic (test)  Lighting Recon Error (L)  18.61    7
BRDF Estimation            OpenRooms synthetic (test)  Albedo Error              0.48     6
Intrinsic Decomposition    IIW 5 (test)                WHDR                      16.4     6
Geometry Estimation        OpenRooms synthetic (test)  Depth Error (D)           1.91     6
Depth Prediction           NYU V2                      Depth Error               0.171    4
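For reference, the WHDR score in the intrinsic decomposition row is the Weighted Human Disagreement Rate from the Intrinsic Images in the Wild benchmark: the weighted fraction of sparse human pairwise darkness judgements that a predicted reflectance map contradicts. A minimal sketch follows; the tuple format for comparisons and the `reflectance` dict are assumptions for illustration, and the 10% equality threshold is the value commonly used for this metric.

```python
def whdr(reflectance, comparisons, delta=0.10):
    """Weighted Human Disagreement Rate over pairwise reflectance judgements.

    reflectance: dict mapping point id -> predicted reflectance intensity (> 0)
    comparisons: list of (i, j, judgement, weight), where judgement is
                 '1' (point i darker), '2' (point j darker), or 'E' (about equal)
    delta: relative threshold below which two reflectances count as equal
    """
    error = total = 0.0
    for i, j, judgement, weight in comparisons:
        r_i, r_j = reflectance[i], reflectance[j]
        if r_i / r_j > 1.0 + delta:       # i clearly brighter -> predict j darker
            predicted = '2'
        elif r_j / r_i > 1.0 + delta:     # j clearly brighter -> predict i darker
            predicted = '1'
        else:                             # within threshold -> predict equal
            predicted = 'E'
        if predicted != judgement:
            error += weight
        total += weight
    return error / total
```

A lower WHDR is better; the 16.4 in the table means the prediction disagrees with about 16.4% of the confidence-weighted human judgements.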
