Robustifying Fourier Features Embeddings for Implicit Neural Representations
About
Implicit Neural Representations (INRs) employ neural networks to represent continuous functions by mapping coordinates to the corresponding values of a target function, with applications in, e.g., inverse graphics. However, INRs face a challenge known as spectral bias when dealing with scenes containing varying frequencies. The most common remedy for spectral bias is Fourier features-based methods such as positional encoding. However, Fourier features-based methods introduce noise into the output, which degrades their performance on downstream tasks. In response, this paper first hypothesizes that combining multi-layer perceptrons (MLPs) with Fourier feature embeddings mutually enhances their strengths, yet simultaneously introduces the limitations inherent in Fourier feature embeddings. We validate this hypothesis with a simple theorem, which serves as the foundation for the design of our solution. Leveraging these insights, we propose the use of MLPs without additive
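The positional-encoding style of Fourier features mentioned above maps each input coordinate through sines and cosines at geometrically spaced frequencies before feeding it to the MLP. The sketch below, in NumPy, shows one common form of this embedding; the `2^k * pi` frequency schedule and the `num_freqs` parameter are illustrative assumptions, not necessarily the exact configuration used in this work.

```python
import numpy as np

def fourier_features(coords, num_freqs=6):
    """Positional-encoding style Fourier features.

    Maps each coordinate x to [sin(2^k * pi * x), cos(2^k * pi * x)]
    for k = 0 .. num_freqs-1. The 2^k * pi schedule is one common
    choice, used here for illustration.
    """
    coords = np.atleast_2d(coords)                 # (N, d)
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi  # (F,)
    angles = coords[..., None] * freqs             # (N, d, F)
    emb = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return emb.reshape(coords.shape[0], -1)        # (N, d * 2F)

# A single 2-D coordinate becomes a 2 * 2 * num_freqs = 24-dim embedding.
x = np.array([[0.25, 0.5]])
print(fourier_features(x).shape)  # (1, 24)
```

Higher `num_freqs` lets the downstream MLP fit finer detail, but, as the abstract notes, the high-frequency components are also a source of the noise that motivates this paper.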
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Representation | Kodak (test) | PSNR 32.96 | 13 |
| 2D Image Representation | DIV2K LR mild track (first 24 images) (val) | PSNR 34.03 | 6 |