SPAGS: Sparse-View Articulated Object Reconstruction from Single State via Planar Gaussian Splatting
About
Articulated objects are ubiquitous in daily environments, and their 3D reconstruction holds great significance across various fields. However, existing articulated object reconstruction methods typically require costly inputs such as multi-stage and multi-view observations. To address these limitations, we propose a category-agnostic articulated object reconstruction framework based on planar Gaussian Splatting, which requires only sparse-view RGB images captured at a single articulation state. Specifically, we first introduce a Gaussian information field to select the optimal sparse viewpoints from candidate camera poses. To ensure precise geometric fidelity, we constrain traditional 3D Gaussians to planar primitives, facilitating accurate normal and depth estimation. The planar Gaussians are then optimized in a coarse-to-fine manner, regularized by depth smoothness and few-shot diffusion priors. Furthermore, we leverage a Vision-Language Model (VLM) via visual prompting to achieve open-vocabulary part segmentation and joint parameter estimation. Extensive experiments on both synthetic and real-world datasets demonstrate that our approach significantly outperforms existing baselines, achieving superior part-level surface reconstruction fidelity. Code and data are provided in the supplementary material.
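The planar constraint described above can be illustrated with a minimal sketch: collapse each Gaussian's smallest scale axis so the primitive becomes disk-like, and read the surface normal off the corresponding column of its rotation matrix. This is an assumption-laden illustration of the general idea, not the paper's actual implementation; the function names and the `eps` thickness are hypothetical.

```python
import numpy as np

def quat_to_rotmat(q):
    # Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix.
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def flatten_gaussian(scales, quat, eps=1e-6):
    """Constrain a 3D Gaussian to a planar (disk-like) primitive by
    collapsing its smallest scale axis; the flattened axis, rotated
    into world space, gives the primitive's normal."""
    scales = scales.copy()
    k = int(np.argmin(scales))   # index of the thinnest axis
    scales[k] = eps              # near-zero thickness -> planar primitive
    R = quat_to_rotmat(quat)
    normal = R[:, k]             # world-space normal of the planar Gaussian
    return scales, normal
```

With an identity rotation and scales `[0.3, 0.1, 0.2]`, the middle axis is flattened and the normal is the world y-axis, which is the kind of per-primitive normal a renderer can then supervise with depth and normal losses.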
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Articulated Object Reconstruction | ArtGS-Multi | Axis Angle Error | 2.9 | 57 |
| Mesh Reconstruction | Real-world articulated objects | Stapler Reconstruction Error | 0.06 | 12 |
| Articulated Modeling | PARIS (Mean) | Axis Angle Error | 2.03 | 8 |
| Novel View Synthesis | PARIS dataset | PSNR | 24.13 | 5 |