
SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes

About

Novel view synthesis for dynamic scenes remains a challenging problem in computer vision and graphics. Recently, Gaussian splatting has emerged as a robust technique for representing static scenes and enabling high-quality, real-time novel view synthesis. Building upon this technique, we propose a new representation that explicitly decomposes the motion and appearance of dynamic scenes into sparse control points and dense Gaussians, respectively. Our key idea is to use sparse control points, significantly fewer in number than the Gaussians, to learn compact 6-DoF transformation bases, which can be locally interpolated through learned interpolation weights to yield the motion field of the 3D Gaussians. We employ a deformation MLP to predict time-varying 6-DoF transformations for each control point, which reduces learning complexity, enhances learning capability, and facilitates temporally and spatially coherent motion patterns. We then jointly learn the 3D Gaussians, the canonical-space locations of the control points, and the deformation MLP to reconstruct the appearance, geometry, and dynamics of 3D scenes. During learning, the location and number of control points are adaptively adjusted to accommodate varying motion complexities in different regions, and an as-rigid-as-possible (ARAP) loss is developed to enforce the spatial continuity and local rigidity of the learned motions. Finally, thanks to the explicit sparse motion representation and its decomposition from appearance, our method enables user-controlled motion editing while retaining high-fidelity appearances. Extensive experiments demonstrate that our approach outperforms existing approaches on novel view synthesis with a high rendering speed and enables novel appearance-preserved motion editing applications. Project page: https://yihua7.github.io/SC-GS-web/
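The core mechanism described above, sparse control-point transforms blended into a dense motion field, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function names, the fixed-radius Gaussian RBF weights, and the nearest-neighbor count `k` are hypothetical choices; the actual method learns per-control-point radii, predicts each control point's 6-DoF transform with a deformation MLP, and its ARAP term also accounts for rotation consistency, whereas the `arap_loss` below only penalizes changes in pairwise distances between neighboring control points.

```python
import numpy as np

def interpolate_motion(gaussian_pos, ctrl_pos, ctrl_R, ctrl_t, radius=0.5, k=4):
    """Blend the 6-DoF transforms (ctrl_R, ctrl_t) of the k nearest control
    points into a per-Gaussian motion, LBS-style (simplified sketch)."""
    # pairwise distances from each Gaussian to each control point: (N, M)
    d = np.linalg.norm(gaussian_pos[:, None, :] - ctrl_pos[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]                 # k nearest control points
    nd = np.take_along_axis(d, idx, axis=1)
    w = np.exp(-(nd ** 2) / (2 * radius ** 2))          # Gaussian RBF weights
    w = w / w.sum(axis=1, keepdims=True)                # normalize per Gaussian

    out = np.zeros_like(gaussian_pos)
    for i, p in enumerate(gaussian_pos):
        q = np.zeros(3)
        for j, wj in zip(idx[i], w[i]):
            # rigid transform of control point j applied to the Gaussian center
            q += wj * (ctrl_R[j] @ (p - ctrl_pos[j]) + ctrl_pos[j] + ctrl_t[j])
        out[i] = q
    return out

def arap_loss(ctrl_pos0, ctrl_pos1, k=4):
    """Simplified as-rigid-as-possible regularizer: neighboring control points
    should preserve pairwise distances between canonical and deformed states."""
    d0 = np.linalg.norm(ctrl_pos0[:, None] - ctrl_pos0[None, :], axis=-1)
    idx = np.argsort(d0, axis=1)[:, 1:k + 1]            # k neighbors, skip self
    loss = 0.0
    for i in range(len(ctrl_pos0)):
        for j in idx[i]:
            l0 = np.linalg.norm(ctrl_pos0[i] - ctrl_pos0[j])
            l1 = np.linalg.norm(ctrl_pos1[i] - ctrl_pos1[j])
            loss += (l1 - l0) ** 2
    return loss / (len(ctrl_pos0) * k)
```

With identity rotations and zero translations the interpolation leaves the Gaussians in place, and a global rigid translation of all control points yields the same translation of every Gaussian with zero ARAP penalty, which is the behavior the regularizer is meant to encourage.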

Yi-Hua Huang, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao, Xiaojuan Qi · 2023

Related benchmarks

| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Novel View Synthesis | D-NeRF synthetic (test) | Average PSNR | 43.31 | 42 |
| Novel View Synthesis | NeRF-DS | Average PSNR | 24.1 | 39 |
| Rendering Performance | TUM | Quality Score (fr3/sit_xyz) | 21.45 | 30 |
| Novel View Reconstruction | HyperNeRF held-out 4D LangSplat (test) | Americano Score | 31.39 | 20 |
| Novel View Reconstruction | HyperNeRF 4D LangSplat (test) | Americano Score | 93 | 20 |
| Novel View Synthesis | HyperNeRF (test) | PSNR | 26.95 | 18 |
| Dynamic 3D Reconstruction | HyperNeRF (test) | PSNR | 21.2 | 18 |
| Dynamic Novel View Synthesis | DyCheck 5 scenes 1.0 | mPSNR | 14.13 | 16 |
| Dynamic Surface Reconstruction | CMU Panoptic (Pizza1) | Accuracy | 11.7 | 12 |
| Dynamic Surface Reconstruction | CMU Panoptic (Band1) | Accuracy | 12.2 | 12 |

Showing 10 of 63 rows.
