
Neural Point-Based Graphics

About

We present a new point-based approach for modeling the appearance of real scenes. The approach uses a raw point cloud as the geometric representation of a scene, and augments each point with a learnable neural descriptor that encodes local geometry and appearance. A deep rendering network is learned in parallel with the descriptors, so that new views of the scene can be obtained by passing the rasterizations of the point cloud from new viewpoints through this network. The input rasterizations use the learned descriptors as point pseudo-colors. We show that the proposed approach can be used for modeling complex scenes and obtaining their photorealistic views, while avoiding explicit surface estimation and meshing. In particular, compelling results are obtained for scenes scanned using hand-held commodity RGB-D sensors as well as standard RGB cameras, even in the presence of objects that are challenging for standard mesh-based modeling.
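The core step described above is rasterizing the point cloud from a new viewpoint, with each point's learned descriptor used as its pseudo-color, before a convolutional rendering network turns that raw rasterization into an image. A minimal NumPy sketch of that rasterization step is below; the function name, shapes, and camera conventions are illustrative assumptions, not the authors' actual API, and the jointly trained rendering network is omitted.

```python
import numpy as np

def rasterize_points(points, descriptors, K, R, t, hw):
    """Z-buffer rasterization of a point cloud whose per-point neural
    descriptors act as pseudo-colors (one step of the pipeline; the
    descriptors and the downstream rendering ConvNet would be learned
    jointly by backpropagation).

    points:      (N, 3) world-space point positions
    descriptors: (N, D) learnable per-point descriptors
    K, R, t:     pinhole intrinsics, rotation, translation
    hw:          (H, W) output resolution
    """
    H, W = hw
    D = descriptors.shape[1]
    image = np.zeros((H, W, D))
    zbuf = np.full((H, W), np.inf)

    cam = (R @ points.T).T + t        # world -> camera coordinates
    proj = (K @ cam.T).T              # pinhole projection
    uv = proj[:, :2] / proj[:, 2:3]   # perspective divide

    for (u, v), z, d in zip(uv, cam[:, 2], descriptors):
        x, y = int(round(u)), int(round(v))
        # keep only the nearest point per pixel (simple z-test)
        if 0 <= x < W and 0 <= y < H and 0 < z < zbuf[y, x]:
            zbuf[y, x] = z
            image[y, x] = d
    return image
```

The resulting (H, W, D) descriptor image is what would be fed to the learned rendering network; in the paper, rendering is done at multiple resolutions to handle the varying density of the projected points, which this single-scale sketch does not show.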

Kara-Ali Aliev, Artem Sevastopolsky, Maria Kolos, Dmitry Ulyanov, Victor Lempitsky • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Novel View Synthesis | NeRF Synthetic | PSNR | 24.56 | 92 |
| Novel View Synthesis | ScanNet 11 (test) | PSNR | 25.09 | 16 |
| Novel View Synthesis | H3DS (holdout frames) | PSNR | 24.68 | 9 |
| Novel View Synthesis | DTU (holdout frames) | PSNR | 26 | 9 |
| Novel View Synthesis | NeRF-Synthetic (holdout frames) | PSNR | 28.62 | 9 |
| Novel View Synthesis | KITTI Road every 100 frames w/ discard (test) | VGG Score | 791.4 | 5 |
| Novel View Synthesis | KITTI City every 100 frames w/ discard (test) | VGG Score | 994.5 | 5 |
| Novel View Synthesis | KITTI Residential every 10 frames w/o discard (test) | VGG Score | 621.2 | 5 |
| Novel View Synthesis | KITTI Road every 10 frames w/o discard (test) | VGG Score | 597.3 | 5 |
| Novel View Synthesis | KITTI City every 10 frames w/o discard (test) | VGG Score | 632.8 | 5 |
Showing 10 of 13 rows
