
SL²A-INR: Single-Layer Learnable Activation for Implicit Neural Representation

About

Implicit Neural Representation (INR), leveraging a neural network to transform coordinate input into corresponding attributes, has recently driven significant advances in several vision-related domains. However, the performance of INR is heavily influenced by the choice of the nonlinear activation function used in its multilayer perceptron (MLP) architecture. To date, multiple nonlinearities have been investigated, but current INRs still face limitations in capturing high-frequency components and diverse signal types. We show that these challenges can be alleviated by introducing a novel approach in INR architecture. Specifically, we propose SL²A-INR, a hybrid network that combines a single-layer learnable activation function with an MLP that uses traditional ReLU activations. Our method achieves superior performance across diverse tasks, including image representation, 3D shape reconstruction, and novel view synthesis. Through comprehensive experiments, SL²A-INR sets new benchmarks in accuracy, quality, and robustness for INR. Our code is publicly available on GitHub: https://github.com/Iceage7/SL2A-INR
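The hybrid design described above (a single first layer with a learnable activation, followed by a standard ReLU MLP) can be sketched numerically as follows. This is a minimal illustrative sketch, not the paper's implementation: the parameterization of the learnable activation here (a learnable mix of sinusoidal basis functions) and all names and dimensions are assumptions for illustration.

```python
import numpy as np

def learnable_activation(x, coeffs):
    # Hypothetical parameterization: an elementwise activation formed as a
    # learnable linear combination of fixed sinusoidal basis functions.
    # The paper's actual learnable-activation form may differ.
    basis = np.stack([np.sin((k + 1) * x) for k in range(len(coeffs))])
    return np.tensordot(coeffs, basis, axes=1)

def sl2a_inr_forward(coords, coeffs, weights):
    # First layer: linear map followed by the learnable activation.
    W0, b0 = weights[0]
    h = learnable_activation(coords @ W0 + b0, coeffs)
    # Remaining hidden layers: a plain ReLU MLP, per the hybrid design.
    for W, b in weights[1:-1]:
        h = np.maximum(h @ W + b, 0.0)
    # Linear output layer mapping features to signal attributes.
    W_out, b_out = weights[-1]
    return h @ W_out + b_out

rng = np.random.default_rng(0)
dims = [2, 64, 64, 3]               # (x, y) coordinates -> RGB attributes
weights = [(rng.normal(0, 0.1, (i, o)), np.zeros(o))
           for i, o in zip(dims[:-1], dims[1:])]
coeffs = rng.normal(0, 1.0, 4)      # 4 learnable basis coefficients
coords = rng.uniform(-1, 1, (5, 2)) # a batch of 5 query coordinates
out = sl2a_inr_forward(coords, coeffs, weights)
print(out.shape)                    # (5, 3)
```

In an actual INR, the weights and the activation coefficients would both be optimized by gradient descent against the target signal; the sketch only shows the forward pass.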

Moein Heidari, Reza Rezaeian, Reza Azad, Dorit Merhof, Hamid Soltanian-Zadeh, Ilker Hacihaliloglu • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 2D Image Fitting | DIV2K D2K0-D2K7 | D2K0 Score | 36.22 | 7 |
| 3D Shape Representation | Stanford 3D Scanning Repository | IoU (Thai statue) | 99.87 | 6 |
| Image fitting | DIV2K | PSNR (0873) | 25.95 | 6 |
