SASNet: Spatially-Adaptive Sinusoidal Networks for INRs
About
Sinusoidal neural networks (SIRENs) are powerful implicit neural representations (INRs) for low-dimensional signals in vision and graphics. By encoding input coordinates with sinusoidal functions, they enable high-frequency image and surface reconstruction. However, training SIRENs is often unstable and highly sensitive to frequency initialization: small frequencies produce overly smooth reconstructions in detailed regions, whereas large ones introduce spurious high-frequency components that manifest as noise in smooth areas such as image backgrounds. To address these challenges, we propose SASNet, a Spatially-Adaptive Sinusoidal Network that couples a frozen frequency embedding layer, which explicitly fixes the network's frequency support, with jointly learned spatial masks that localize neuron influence across the domain. This pairing stabilizes optimization, sharpens edges, and suppresses noise in smooth areas. Experiments on 2D image and 3D volumetric data fitting as well as signed distance field (SDF) reconstruction benchmarks demonstrate that SASNet achieves faster convergence, superior reconstruction quality, and robust frequency localization -- assigning low- and high-frequency neurons to smooth and detailed regions respectively -- while maintaining parameter efficiency. Code available here: https://github.com/Fengyee/SASNet_inr.
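To make the idea concrete, here is a minimal sketch of one SASNet-style hidden layer in NumPy. The exact formulation is not given in this summary, so the details are assumptions: frozen, uniformly sampled sinusoidal frequencies stand in for the fixed frequency embedding, and a learned Gaussian bump (center `mu`, width `sigma` per neuron) stands in for the spatial mask that localizes each neuron's influence.

```python
import numpy as np

class SpatiallyAdaptiveSineLayer:
    """Sketch of a SASNet-style hidden layer (details assumed, not the
    authors' exact formulation): a frozen sinusoidal frequency embedding
    combined with learned per-neuron spatial masks."""

    def __init__(self, in_dim, n_neurons, omega_max=30.0, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen frequency embedding: fixed after initialization
        # (never trained), which pins the network's frequency support.
        self.W = rng.uniform(-omega_max, omega_max, size=(n_neurons, in_dim))
        self.b = rng.uniform(-np.pi, np.pi, size=n_neurons)
        # Learned spatial-mask parameters (assumed Gaussian bumps):
        # a center mu and log-width per neuron, updated during training.
        self.mu = rng.uniform(-1.0, 1.0, size=(n_neurons, in_dim))
        self.log_sigma = np.zeros(n_neurons)

    def forward(self, x):
        """x: (batch, in_dim) coordinates, e.g. pixel locations in [-1, 1]."""
        sine = np.sin(x @ self.W.T + self.b)                  # (batch, n_neurons)
        # Gaussian spatial mask: each neuron is active only near its center,
        # so a high-frequency neuron confined to a detailed region cannot
        # inject noise into smooth areas elsewhere in the domain.
        d2 = ((x[:, None, :] - self.mu[None, :, :]) ** 2).sum(axis=-1)
        sigma2 = np.exp(self.log_sigma) ** 2
        mask = np.exp(-d2 / (2.0 * sigma2 + 1e-8))
        return mask * sine
```

Because the frequencies in `W` are frozen, training only moves the masks (and any downstream weights), which is one plausible reading of how the pairing stabilizes optimization while still letting high-frequency neurons migrate to detailed regions.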
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| SDF Reconstruction | Dragon | Chamfer Distance | 1.96e-6 | 6 |
| SDF Reconstruction | Lucy | Chamfer Distance | 1.93e-6 | 6 |
| SDF Reconstruction | Thai Statue | Chamfer Distance | 3.09e-6 | 6 |
| Volumetric Data Reconstruction | ScalarFlow | PSNR | 55.92 | 6 |
| SDF Reconstruction | Armadillo | Chamfer Distance | 3.37e-6 | 6 |