
GoDe: Gaussians on Demand for Progressive Level of Detail and Scalable Compression

About

Recent progress in compressing explicit radiance field representations, particularly 3D Gaussian Splatting, has substantially reduced memory consumption while improving real-time rendering performance. However, existing approaches remain inherently single-rate: each compression level requires a separately optimized model, yielding a set of fixed operating points rather than a truly scalable representation. This limits deployment in scenarios where memory, bandwidth, or computational budgets vary across devices or over time. We argue that scalability should be an intrinsic property of the representation. We show that trained explicit radiance models exhibit a structured distribution of information, which can be revealed using standard optimization signals available during training. In particular, aggregated gradient sensitivity provides a simple, model-agnostic criterion to organize primitives from coarse structure to finer refinements. Building on this, we introduce GoDe (Gaussians on Demand), a general framework for scalable compression and progressive level-of-detail control, instantiated for 3D Gaussian Splatting. Starting from a single trained model, GoDe reorganizes Gaussian primitives into a fixed progressive hierarchy supporting multiple rate-distortion operating points without retraining or per-level fine-tuning. A single quantization-aware fine-tuning stage ensures consistent behavior across all levels under low-precision storage. Extensive experiments on standard benchmarks and multiple 3D Gaussian Splatting backbones show that GoDe achieves rate-distortion performance comparable to state-of-the-art single-rate methods, while enabling truly scalable compression and adaptive rendering within a unified representation. Project page: https://gaussians-on-demand.github.io
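The core idea described above, ranking primitives by aggregated gradient sensitivity and grouping them into nested prefixes that form a coarse-to-fine hierarchy, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the scalar sensitivity scores, and the level fractions are illustrative assumptions.

```python
import numpy as np

def build_lod_hierarchy(grad_sensitivity, level_fractions):
    """Order primitives by aggregated gradient sensitivity (highest first)
    and assign nested level-of-detail prefixes: level k keeps the top
    level_fractions[k] fraction of primitives.  Illustrative sketch only;
    the actual aggregation and level schedule are assumptions."""
    order = np.argsort(-grad_sensitivity)  # most sensitive primitives first
    n = len(grad_sensitivity)
    levels = {}
    for k, frac in enumerate(level_fractions):
        # each level is a prefix of the same ordering, so levels are nested
        levels[k] = order[: max(1, int(frac * n))]
    return order, levels

# toy example: 10 primitives with random accumulated |gradient| scores
rng = np.random.default_rng(0)
sens = rng.random(10)
order, levels = build_lod_hierarchy(sens, [0.25, 0.5, 1.0])
# coarse levels are subsets of finer ones: a single progressive ordering
assert set(levels[0]) <= set(levels[1]) <= set(levels[2])
```

Because every level is a prefix of one fixed ordering, a decoder can stop reading the stream at any level boundary and still obtain a valid, coarser model, which is what makes the representation scalable rather than a set of independent single-rate models.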

Francesco Di Sario, Riccardo Renzulli, Marco Grangetto, Akihiro Sugimoto, Enzo Tartaglione • 2025

Related benchmarks

Task                    Dataset                          Metric            Result   Rank
Novel View Synthesis    Mip-NeRF360                      PSNR              27.87    45
Novel View Synthesis    Tanks&Temples                    PSNR              24.61    29
Novel View Synthesis    Deep Blending                    PSNR              30.33    29
3D Scene Compression    3D Scene Compression Performance Encoding Time (s) 2.1      21
Novel View Synthesis    Deep Blending DrJohnson scene    PSNR              29.28    11
Novel View Synthesis    Deep Blending Playroom scene     PSNR              30.29    11
Novel View Synthesis    MipNeRF360 Bonsai                Model Size (MB)   3.7      8
Novel View Synthesis    MipNeRF360 Flowers               Model Size (MB)   3.9      8
