
VI3NR: Variance Informed Initialization for Implicit Neural Representations

About

Implicit Neural Representations (INRs) are a versatile and powerful tool for encoding various forms of data, including images, videos, sound, and 3D shapes. A critical factor in the success of INRs is the initialization of the network, which can significantly impact the convergence and accuracy of the learned model. Unfortunately, commonly used neural network initializations do not extend to many activation functions, especially those used by INRs. In this paper, we improve upon previous initialization methods by deriving an initialization that keeps the variance stable across layers and applies to any activation function. We show that this generalizes many previous initialization methods, and offers even better stability for well-studied activations. We also show that our initialization leads to improved results with INR activation functions across multiple signal modalities. Our approach is particularly effective for Gaussian INRs, where we demonstrate that the theory behind our initialization matches task performance in multiple experiments, allowing us to achieve improvements in image, audio, and 3D surface reconstruction.
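The core idea described above, choosing weight scales so that activations keep a stable variance from layer to layer, can be sketched numerically. The snippet below is a minimal illustration of variance-matching initialization in general, not the paper's exact derivation: it estimates the second moment of an activation's output under a standard normal input by Monte Carlo, then picks the weight standard deviation so the next layer's pre-activations have unit variance. The bandwidth `s` of the Gaussian activation and the `fan_in` value are illustrative assumptions.

```python
import numpy as np

def activation_output_stats(act, n_samples=1_000_000, seed=0):
    # Estimate the mean and variance of act(Z) for Z ~ N(0, 1) by Monte Carlo.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_samples)
    y = act(z)
    return y.mean(), y.var()

def variance_preserving_std(act, fan_in):
    # With zero-mean i.i.d. weights of std sigma, a pre-activation
    # u = sum_i w_i * act(z_i) has Var(u) = fan_in * sigma^2 * E[act(Z)^2].
    # Setting Var(u) = 1 gives sigma = 1 / sqrt(fan_in * E[act(Z)^2]).
    # Note the second moment is used, not the variance, since act(Z)
    # need not be zero-mean (e.g. Gaussian and ReLU activations are not).
    mean, var = activation_output_stats(act)
    second_moment = var + mean**2
    return 1.0 / np.sqrt(fan_in * second_moment)

# Gaussian activation as used by Gaussian INRs; the bandwidth s = 0.1
# is a placeholder hyperparameter, not a value from the paper.
gaussian = lambda x, s=0.1: np.exp(-x**2 / (2 * s**2))

sigma_gauss = variance_preserving_std(gaussian, fan_in=256)
```

As a sanity check, plugging ReLU into `variance_preserving_std` recovers the familiar He initialization scale `sqrt(2 / fan_in)`, since `E[relu(Z)^2] = 1/2`, which illustrates the sense in which a variance-matched initialization generalizes earlier schemes.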

Chamin Hewa Koneputugodage, Yizhak Ben-Shabat, Sameera Ramasinghe, Stephen Gould • 2025

Related benchmarks

| Task                  | Dataset              | Metric           | Result  | Rank |
|-----------------------|----------------------|------------------|---------|------|
| Image Reconstruction  | KODIM                | Mean PSNR        | 79.73   | 7    |
| 3D SDF Reconstruction | BACON                | Mean Error       | 0.7252  | 7    |
| Audio Reconstruction  | SIREN audio segments | Bach MSE (x1e-3) | 4.80e-4 | 5    |

Other info

Code
