
Transformers as Meta-Learners for Implicit Neural Representations

About

Implicit Neural Representations (INRs) have emerged in recent years and shown their benefits over discrete representations. However, fitting an INR to given observations usually requires optimization with gradient descent from scratch, which is inefficient and does not generalize well with sparse observations. To address this problem, most prior works train a hypernetwork that generates a single vector to modulate the INR weights, where the single vector becomes an information bottleneck that limits the reconstruction precision of the output INR. Recent work shows that the whole set of INR weights can be precisely inferred without the single-vector bottleneck by gradient-based meta-learning. Motivated by a generalized formulation of gradient-based meta-learning, we propose using Transformers as hypernetworks for INRs, which can directly build the whole set of INR weights with a Transformer specialized as a set-to-set mapping. We demonstrate the effectiveness of our method for building INRs in different tasks and domains, including 2D image regression and view synthesis for 3D objects. Our work draws connections between Transformer hypernetworks and gradient-based meta-learning algorithms, and we provide further analysis for understanding the generated INRs.

Yinbo Chen, Xiaolong Wang • 2022
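The core mechanism described in the abstract is a Transformer acting as a set-to-set hypernetwork: embedded observation tokens and learnable weight tokens are processed jointly, and each output weight token is decoded into one group of INR weights (e.g., one column of a layer's weight matrix). Below is a minimal PyTorch sketch of this idea; the class name `TransformerHypernetwork`, the column-per-token grouping, and all dimensions are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class TransformerHypernetwork(nn.Module):
    """Maps a set of observation tokens to the full weight set of an INR MLP."""
    def __init__(self, token_dim=256, inr_in=2, inr_hidden=64, inr_out=3, inr_layers=3):
        super().__init__()
        dims = [inr_in] + [inr_hidden] * (inr_layers - 1) + [inr_out]
        self.dims = dims
        # one learnable query token per output column of each INR weight matrix
        self.n_tokens = sum(dims[1:])
        self.weight_queries = nn.Parameter(torch.randn(self.n_tokens, token_dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        # per-layer heads that decode a token into one column (weights + bias entry)
        self.heads = nn.ModuleList([nn.Linear(token_dim, d_in + 1) for d_in in dims[:-1]])

    def forward(self, obs_tokens):
        # obs_tokens: (B, N, token_dim) -- embedded observations (e.g., image patches)
        B, N = obs_tokens.shape[0], obs_tokens.shape[1]
        queries = self.weight_queries.unsqueeze(0).expand(B, -1, -1)
        out = self.encoder(torch.cat([obs_tokens, queries], dim=1))[:, N:]
        weights, i = [], 0
        for head, d_out in zip(self.heads, self.dims[1:]):
            cols = head(out[:, i:i + d_out])                 # (B, d_out, d_in + 1)
            weights.append((cols[..., :-1], cols[..., -1]))  # per-layer (W, b)
            i += d_out
        return weights

def inr_forward(coords, weights):
    # coords: (B, M, inr_in); evaluate the generated MLP at query coordinates
    h = coords
    for j, (W, b) in enumerate(weights):
        h = torch.einsum('bmi,boi->bmo', h, W) + b.unsqueeze(1)
        if j < len(weights) - 1:
            h = torch.relu(h)
    return h

# usage sketch: predict a per-image INR, then query it at pixel coordinates
hyper = TransformerHypernetwork()
obs = torch.randn(4, 196, 256)         # e.g., 14x14 embedded image patches
coords = torch.rand(4, 1024, 2)        # query coordinates in [0, 1]^2
rgb = inr_forward(coords, hyper(obs))  # (4, 1024, 3)
```

Decoding one column per token, rather than regressing a single latent vector for the whole network, is what avoids the single-vector bottleneck the abstract contrasts against.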

Related benchmarks

Task                 | Dataset                | Result     | Rank
---------------------|------------------------|------------|-----
Image Reconstruction | ImageNet 256x256       | --         | 150
Image Reconstruction | FFHQ (test)            | PSNR 33.66 | 36
Novel View Synthesis | ShapeNet cars category | PSNR 23.78 | 20
Image Reconstruction | CelebA (test)          | --         | 15
Image fitting        | OASIS-MRI              | PSNR 55.5  | 13
Image fitting        | AFHQ                   | PSNR 49    | 12
Image fitting        | CelebA-HQ              | PSNR 51.9  | 10
Image Reconstruction | CelebA 178 x 178       | PSNR 32.37 | 9
Image Reconstruction | Imagenette 178 x 178   | PSNR 29.01 | 9
Image fitting        | AFHQ OOD               | PSNR 49.01 | 8
Showing 10 of 29 rows
