ELMGS: Enhancing memory and computation scaLability through coMpression for 3D Gaussian Splatting

About

3D models have recently been popularized by the possibility of end-to-end training, offered first by Neural Radiance Fields and most recently by 3D Gaussian Splatting. The latter has the notable advantage of naturally providing fast training convergence and high editability. However, as research in this area is still in its infancy, there is a gap in the literature regarding the scalability of such models. In this work, we propose an approach enabling both memory and computation scalability for these models. More specifically, we propose an iterative pruning strategy that removes redundant information encoded in the model. We further enhance the model's compressibility by including a differentiable quantization and entropy-coding estimator in the optimization strategy. Our results on popular benchmarks showcase the effectiveness of the proposed approach and open the road to broad deployability of such a solution, even on resource-constrained devices.

Muhammad Salman Ali, Sung-Ho Bae, Enzo Tartaglione • 2024
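
The abstract names three ingredients: importance-based pruning of redundant Gaussians, differentiable quantization, and an entropy-coding (rate) estimator folded into the optimization. As a rough illustration only, the minimal PyTorch sketch below shows one common way such components can be wired together; the choice of raw opacity as the pruning score, the straight-through rounding, the Laplace-prior rate proxy, and all names and values are our own assumptions, not the authors' implementation.

import torch

def prune_gaussians(opacity: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    # Boolean mask keeping the `keep_ratio` fraction of Gaussians with the
    # highest opacity (opacity is used here as a stand-in importance score).
    k = max(1, int(keep_ratio * opacity.numel()))
    threshold = torch.topk(opacity, k).values.min()
    return opacity >= threshold

def quantize_ste(x: torch.Tensor, step: float) -> torch.Tensor:
    # Uniform quantization with a straight-through estimator: the forward
    # pass rounds to the nearest step, the backward pass is the identity.
    q = torch.round(x / step) * step
    return x + (q - x).detach()

def rate_bits_laplace(x: torch.Tensor, scale: torch.Tensor, step: float) -> torch.Tensor:
    # Differentiable estimate of the coded size (in bits), assuming each
    # quantized value is entropy-coded under a zero-mean Laplace prior.
    prior = torch.distributions.Laplace(torch.zeros_like(scale), scale)
    p = (prior.cdf(x + step / 2) - prior.cdf(x - step / 2)).clamp_min(1e-9)
    return -torch.log2(p).sum()

if __name__ == "__main__":
    step = 1.0 / 255.0
    opacity = torch.rand(10_000)                          # per-Gaussian opacities
    colors = torch.randn(10_000, 3, requires_grad=True)   # a compressible attribute

    mask = prune_gaussians(opacity, keep_ratio=0.6)       # applied iteratively in practice
    colors_q = quantize_ste(colors[mask], step)           # differentiable rounding
    scale = torch.ones(1, requires_grad=True)             # learned scale of the prior
    rate = rate_bits_laplace(colors_q, scale, step)       # rate term for the training loss

    rate.backward()                                       # gradients reach `colors` and `scale`
    print(mask.sum().item(), rate.item())

In a full pipeline the rate term would be weighted and added to the rendering loss, so the optimizer trades reconstruction quality against the estimated coded size.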

Related benchmarks

Task                   Dataset                                      Result        Rank
Novel View Synthesis   Mip-NeRF 360 (test)                          PSNR 27       166
Novel View Synthesis   Tanks&Temples                                SSIM 82.5     39
Novel View Synthesis   Deep Blending (average across all scenes)    PSNR 29.24    12
