
Learning Continuous Image Representation with Local Implicit Image Function

About

How should an image be represented? While the visual world is continuous, machines store and see images discretely, as 2D arrays of pixels. In this paper, we seek to learn a continuous representation for images. Inspired by recent progress in 3D reconstruction with implicit neural representations, we propose the Local Implicit Image Function (LIIF), which takes an image coordinate and the 2D deep features around that coordinate as inputs and predicts the RGB value at the coordinate as output. Since coordinates are continuous, the LIIF representation can be presented at arbitrary resolution. To generate the continuous representation for images, we train an encoder with the LIIF representation via a self-supervised super-resolution task. The learned continuous representation can be rendered at arbitrary resolution, even extrapolating to ×30 higher resolution than provided by the training tasks. We further show that the LIIF representation builds a bridge between discrete and continuous representations in 2D: it naturally supports learning tasks with size-varied image ground truths and significantly outperforms methods that resize the ground truths.
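The decoding step described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the authors' implementation): a query coordinate is mapped to its nearest latent code in the feature map, and a decoder receives that code together with the coordinate's offset from the code's center; the function names, toy decoder, and feature sizes are all assumptions for the sake of the example.

```python
import numpy as np

def liif_decode(feat, coord, decoder):
    """Query an RGB value at a continuous coordinate from a 2D feature map.

    feat:    (H, W, C) deep feature map produced by an encoder.
    coord:   (2,) continuous coordinate in [0, 1]^2.
    decoder: function mapping [latent code, relative coord] -> RGB.
    """
    H, W, C = feat.shape
    # Nearest latent code to the query coordinate.
    i = min(int(coord[0] * H), H - 1)
    j = min(int(coord[1] * W), W - 1)
    z = feat[i, j]
    # Offset from the latent code's cell center -- the "local" part of LIIF:
    # the same code answers queries anywhere inside its cell.
    center = np.array([(i + 0.5) / H, (j + 0.5) / W])
    rel = coord - center
    return decoder(np.concatenate([z, rel]))

# Toy stand-in for the decoding MLP: a fixed random linear map to 3 channels.
rng = np.random.default_rng(0)
W_out = rng.normal(size=(3, 66))  # 64 feature dims + 2 coordinate dims
toy_decoder = lambda x: W_out @ x

feat = rng.normal(size=(8, 8, 64))
rgb = liif_decode(feat, np.array([0.37, 0.81]), toy_decoder)
print(rgb.shape)  # (3,)
```

Because the query coordinate is continuous, sampling a dense grid of coordinates at any spacing yields an output image at any resolution from the same fixed-size feature map.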

Yinbo Chen, Sifei Liu, Xiaolong Wang • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Super-Resolution | Set5 | PSNR 41.23 | 751 |
| Image Super-resolution | Manga109 | PSNR 42.84 | 656 |
| Super-Resolution | Urban100 | PSNR 36.7 | 603 |
| Image Super-resolution | Set5 (test) | -- | 544 |
| Image Super-resolution | Set5 | PSNR 38.02 | 507 |
| Super-Resolution | B100 (test) | PSNR 32.39 | 363 |
| Super-Resolution | Set14 (test) | PSNR 34.14 | 246 |
| Image Super-resolution | Urban100 | PSNR 32.52 | 221 |
| Image Super-resolution | BSD100 | PSNR 35.76 dB | 210 |
| Super-Resolution | Urban100 (test) | -- | 205 |

Showing 10 of 44 rows
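All results above are reported as PSNR (peak signal-to-noise ratio, in dB), the standard fidelity metric for super-resolution. As a quick reference, a minimal sketch of how PSNR is computed (the image data here is synthetic, purely for illustration):

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic ground truth plus small Gaussian noise as a fake "reconstruction".
rng = np.random.default_rng(0)
gt = rng.random((32, 32, 3))
noisy = np.clip(gt + 0.01 * rng.standard_normal(gt.shape), 0.0, 1.0)
print(round(psnr(noisy, gt), 1))  # roughly 40 dB for noise std 0.01
```

Higher is better: each +10 dB corresponds to a 10x reduction in mean squared error, so the gap between, say, 32 dB and 42 dB in the table is substantial.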
