Radial M\"untz-Sz\'asz Networks: Neural Architectures with Learnable Power Bases for Multidimensional Singularities
About
Radial singular fields, such as $1/r$, $\log r$, and crack-tip profiles, are difficult for coordinate-separable neural architectures to model. We show that any $C^2$ function that is both radial and additively separable must be quadratic, establishing a fundamental obstruction for coordinate-wise power-law models. Motivated by this result, we introduce Radial Müntz-Szász Networks (RMN), which represent fields as linear combinations of learnable radial powers $r^\mu$, including negative exponents, together with a limit-stable log-primitive for exact $\log r$ behavior. RMN admits closed-form spatial gradients and Laplacians, enabling physics-informed learning on punctured domains. Across ten 2D and 3D benchmarks, RMN achieves 1.5$\times$--51$\times$ lower RMSE than MLPs and 10$\times$--100$\times$ lower RMSE than SIREN, while using 27 parameters compared with 33,537 for MLPs and 8,577 for SIREN. We extend RMN to angular dependence (RMN-Angular) and to multiple sources with learnable centers (RMN-MC); when optimization converges, source-center recovery errors fall below $10^{-4}$. We also report controlled failures on smooth, strongly non-radial targets to delineate RMN's operating regime.
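The separability obstruction admits a short proof sketch. The argument below is reconstructed from standard multivariable calculus and is not quoted from the paper:

```latex
% Claim: a C^2 function that is radial, f(x) = h(r) with r = \lVert x \rVert,
% and additively separable, f(x) = \sum_i f_i(x_i), must be quadratic.
% Separability kills all mixed second partials:
\[
\partial_i \partial_j f = 0 \qquad (i \neq j),
\]
% while for the radial form, with i \neq j,
\[
\partial_i \partial_j\, h(r)
  = \Bigl( h''(r) - \frac{h'(r)}{r} \Bigr) \frac{x_i x_j}{r^2}.
\]
% Equating the two forces h''(r) = h'(r)/r on r > 0, whose solutions are
% h'(r) = c\,r, hence h(r) = (c/2) r^2 + \text{const}: f is quadratic.
```

Coordinate-wise power-law models are additively separable by construction, so by this lemma they can only represent radial behavior that is quadratic, which excludes $1/r$, $\log r$, and $r^{1/2}$-type crack-tip fields.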
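For concreteness, here is a minimal PyTorch sketch of the RMN ansatz, not the authors' implementation: field values are a learned linear combination of radial powers $r^{\mu_k}$ with trainable (possibly negative) exponents, plus a plain $\log r$ term standing in for the paper's limit-stable log-primitive. The class name `RMNSketch`, the argument names, and the initial exponent grid are all illustrative assumptions.

```python
import torch
from torch import nn

class RMNSketch(nn.Module):
    """Hypothetical RMN-style layer (illustration, not the authors' code):
    u(x) = sum_k a_k * r**mu_k + b * log(r), with r = ||x - c||."""

    def __init__(self, dim: int, n_powers: int = 8, learn_center: bool = False):
        super().__init__()
        # Source center c; trainable in an RMN-MC-style variant.
        self.center = nn.Parameter(torch.zeros(dim), requires_grad=learn_center)
        # Learnable exponents mu_k; negative values allow 1/r-type singularities.
        self.mu = nn.Parameter(torch.linspace(-1.0, 2.0, n_powers))
        self.coef = nn.Parameter(torch.zeros(n_powers))   # coefficients a_k
        self.log_coef = nn.Parameter(torch.zeros(()))     # coefficient b of the log term

    def forward(self, x: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
        # x: (N, dim). Clamp r away from 0 so negative powers stay finite
        # off the puncture (training happens on punctured domains).
        r = torch.linalg.vector_norm(x - self.center, dim=-1).clamp_min(eps)
        basis = r.unsqueeze(-1) ** self.mu                # (N, n_powers): r^{mu_k}
        return basis @ self.coef + self.log_coef * torch.log(r)
```

Because each basis term is an explicit power of $r$, derivatives are closed-form, e.g. $\nabla r^{\mu} = \mu\, r^{\mu-2}(x - c)$ and $\Delta r^{\mu} = \mu(\mu + d - 2)\, r^{\mu-2}$ in $d$ dimensions, which is what keeps PINN-style residuals cheap. Setting `learn_center=True` gives a single-source analogue of RMN-MC; multiple sources would sum several such modules.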
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Function Approximation | 2D smooth | RMSE 4.781 | 5 |
| Solving 3D Poisson equation | 3D Poisson (val) | Best Rel. L2 Error 8.8 | 4 |
| Function Approximation | 2D log r | -- | 4 |
| Function Approximation | 2D r^1/2 | -- | 4 |
| Function Approximation | 2D r^-1 | -- | 4 |
| Function Approximation | 2D multi-power | -- | 4 |
| Function Approximation | 3D Coulomb | -- | 4 |
| Function Approximation | 2D crack-tip | -- | 2 |
| Function Approximation | 2D 2-source | -- | 2 |
| Function Approximation | 2D 3-source | -- | 2 |