
Understanding NTK Variance in Implicit Neural Representations

About

Implicit Neural Representations (INRs) often converge slowly and struggle to recover high-frequency details due to spectral bias. While prior work links this behavior to the Neural Tangent Kernel (NTK), how specific architectural choices affect NTK conditioning remains unclear. We show that many INR mechanisms can be understood through their impact on a small set of pairwise similarity factors and scaling terms that jointly determine NTK eigenvalue variance. For standard coordinate MLPs, limited input-feature interactions induce large eigenvalue dispersion and poor conditioning. We derive closed-form variance decompositions for common INR components and show that positional encoding reshapes input similarity, spherical normalization reduces variance via layerwise scaling, and Hadamard modulation introduces additional similarity factors strictly below one, yielding multiplicative variance reduction. This unified view explains how diverse INR architectures mitigate spectral bias by improving NTK conditioning. Experiments across multiple tasks confirm the predicted variance reductions and demonstrate faster, more stable convergence with improved reconstruction quality.
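As a rough illustration of the claim that positional encoding reshapes input similarity, the sketch below compares the eigenvalue dispersion of the input Gram matrix for raw 1-D coordinates against Fourier-encoded ones. This is a simplified stand-in for the paper's NTK variance analysis, not its actual derivation; the encoding and the dispersion measure are our own minimal choices.

```python
import numpy as np

# 1-D coordinates on a grid, as fed to a coordinate MLP.
x = np.linspace(-1.0, 1.0, 64)[:, None]           # shape (64, 1)

def fourier_encode(x, n_freq=8):
    """Standard Fourier positional encoding [sin(2^k pi x), cos(2^k pi x)]."""
    freqs = (2.0 ** np.arange(n_freq)) * np.pi    # (n_freq,)
    ang = x * freqs                                # (64, n_freq)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=1) / np.sqrt(n_freq)

def eig_dispersion(features):
    """Eigenvalue variance of the Gram matrix, normalized to mean eigenvalue 1."""
    g = features @ features.T
    g = g / np.trace(g) * len(g)                  # trace = n, so mean eigenvalue = 1
    lam = np.linalg.eigvalsh(g)
    return lam.var()

raw = eig_dispersion(x)                 # rank-1 Gram: one huge eigenvalue, rest zero
enc = eig_dispersion(fourier_encode(x)) # energy spread over many near-equal eigenvalues

print(raw, enc)
```

Raw 1-D coordinates give a rank-1 Gram matrix, so all spectral mass sits in a single eigenvalue (dispersion 63 for 64 points under this normalization); the Fourier-encoded features spread it across many near-orthogonal directions, shrinking the variance by over an order of magnitude — the qualitative effect the abstract attributes to positional encoding.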

Chengguang Ou, Yixin Zhuang · 2025

Related benchmarks

Task                  Dataset  Result        Rank
Super-Resolution      DIV2K    PSNR 27.92    101
Image Reconstruction  DIV2K    PSNR 36.12    20
Image Reconstruction  Text     PSNR 51.71    5
