
Diverse Image Inpainting with Bidirectional and Autoregressive Transformers

About

Image inpainting is an underdetermined inverse problem, which naturally allows diverse contents to fill up the missing or corrupted regions realistically. Prevalent approaches using convolutional neural networks (CNNs) can synthesize visually pleasant contents, but CNNs suffer from limited receptive fields for capturing global features. With image-level attention, transformers can model long-range dependencies and generate diverse contents by autoregressively modeling pixel-sequence distributions. However, the unidirectional attention in autoregressive transformers is suboptimal, as corrupted image regions may have arbitrary shapes with contexts from any direction. We propose BAT-Fill, an image inpainting framework built on a novel bidirectional autoregressive transformer (BAT). BAT uses transformers to learn autoregressive distributions, which naturally allows diverse generation of missing contents. In addition, it incorporates a masked language model like BERT, which enables bidirectional modeling of the contextual information around missing regions for better image completion. Extensive experiments over multiple datasets show that BAT-Fill achieves superior diversity and fidelity in image inpainting, both qualitatively and quantitatively.
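The key architectural point above is the attention mask: a standard autoregressive transformer restricts each position to earlier positions (causal mask), while a BERT-style masked model lets known positions provide context from any direction. A minimal sketch of the two mask shapes (not the paper's implementation; `missing` marks hypothetical corrupted token positions):

```python
import numpy as np

def causal_mask(n):
    # Unidirectional (autoregressive): position i attends only to positions <= i.
    return np.tril(np.ones((n, n), dtype=bool))

def bidirectional_mask(n, missing):
    # Bidirectional (BERT-style): every position attends to all known tokens,
    # so a corrupted region gathers context from both left and right.
    m = np.ones((n, n), dtype=bool)
    m[:, missing] = False      # the content of missing tokens is not attended to
    np.fill_diagonal(m, True)  # each position still attends to its own slot
    return m
```

With 5 tokens and position 1 missing, the causal mask lets position 1 see only positions 0 and 1, whereas the bidirectional mask lets it draw on positions 0, 2, 3, and 4 as well.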

Yingchen Yu, Fangneng Zhan, Rongliang Wu, Jianxiong Pan, Kaiwen Cui, Shijian Lu, Feiying Ma, Xuansong Xie, Chunyan Miao · 2021

Related benchmarks

Task              Dataset                                          Result        Rank
Image Inpainting  CelebA with irregular mask (0-20% mask ratio)    PSNR 34.63    8
Image Inpainting  CelebA with irregular mask (20-40% mask ratio)   PSNR 26.91    8
Image Inpainting  CelebA with irregular mask (40-60% mask ratio)   PSNR 22.26    8
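The benchmark metric above, PSNR (peak signal-to-noise ratio), compares the inpainted image against the ground truth; higher is better. A minimal sketch of the standard definition (assuming 8-bit images, so a peak value of 255; this is the generic formula, not code from the paper):

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    # PSNR = 10 * log10(peak^2 / MSE), in decibels.
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Larger mask ratios leave less surrounding context, which is why PSNR drops from 34.63 (0-20% masked) to 22.26 (40-60% masked) in the table above.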
