
PocketGS: On-Device Training of 3D Gaussian Splatting for High Perceptual Modeling

About

Efficient, high-fidelity 3D scene modeling is a long-standing pursuit in computer graphics. While recent 3D Gaussian Splatting (3DGS) methods achieve impressive real-time modeling performance, they rely on resource-unconstrained training assumptions that fail on mobile devices, where training budgets are measured in minutes and peak memory is capped by the hardware. We present PocketGS, a mobile scene modeling paradigm that enables on-device 3DGS training under these tightly coupled constraints while preserving high perceptual fidelity. Our method resolves the fundamental contradictions of standard 3DGS through three co-designed operators: G builds geometry-faithful point-cloud priors; I injects local surface statistics to seed anisotropic Gaussians, thereby reducing early conditioning gaps; and T unrolls alpha compositing with cached intermediates and index-mapped gradient scattering for stable mobile backpropagation. Collectively, these operators satisfy the competing requirements of training efficiency, memory compactness, and modeling fidelity. Extensive experiments demonstrate that PocketGS outperforms the mainstream workstation 3DGS baseline, delivering high-quality reconstructions and enabling a fully on-device, practical capture-to-rendering workflow.
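To illustrate the idea behind the T operator, the following is a minimal sketch (not the authors' implementation) of front-to-back alpha compositing for one pixel, where the forward pass caches per-Gaussian transmittances so the backward pass can reuse them rather than recompute the prefix products. Function names and the NumPy formulation are assumptions for illustration.

```python
import numpy as np

def composite_forward(colors, alphas):
    # Front-to-back alpha compositing over N depth-sorted Gaussians
    # for a single pixel: out = sum_i T_i * alpha_i * c_i, where
    # T_i = prod_{j<i} (1 - alpha_j) is the transmittance before i.
    # T is cached as the intermediate needed by the backward pass.
    N = len(alphas)
    T = np.empty(N)
    t = 1.0
    out = np.zeros(3)
    for i in range(N):
        T[i] = t
        out += t * alphas[i] * colors[i]
        t *= 1.0 - alphas[i]
    return out, T

def composite_backward(colors, alphas, T, grad_out):
    # Gradient w.r.t. each alpha using the cached transmittances:
    # d(out)/d(alpha_i) = T_i c_i - (sum_{j>i} T_j alpha_j c_j) / (1 - alpha_i)
    # A reverse sweep accumulates the suffix sum, so the whole
    # backward pass is a single pass with no recomputation.
    N = len(alphas)
    grad_alpha = np.zeros(N)
    suffix = np.zeros(3)  # contribution of Gaussians behind i
    for i in range(N - 1, -1, -1):
        grad_alpha[i] = grad_out @ (T[i] * colors[i] - suffix / (1.0 - alphas[i]))
        suffix += T[i] * alphas[i] * colors[i]
    return grad_alpha
```

Caching `T` trades a small, bounded amount of memory (one float per blended Gaussian) for a backward pass that never re-walks the compositing chain, which is the kind of compute/memory balance a minute-scale mobile training budget requires.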

Wenzhi Guo, Guangchi Fang, Shu Yang, Bing Wang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Novel View Synthesis | LLFF (test) | PSNR | 23.54 | 79 |
| Novel View Synthesis | NeRF Synthetic (test) | PSNR | 24.32 | 36 |
| Novel View Synthesis | MobileScan (test) | PSNR | 23.67 | 3 |
