
Cross Aggregation Transformer for Image Restoration

About

Recently, the Transformer architecture has been introduced into image restoration to replace the convolutional neural network (CNN), with surprising results. Considering the high computational complexity of Transformers with global attention, some methods use a local square window to limit the scope of self-attention. However, these methods lack direct interaction among different windows, which limits the establishment of long-range dependencies. To address this issue, we propose a new image restoration model, the Cross Aggregation Transformer (CAT). The core of our CAT is Rectangle-Window Self-Attention (Rwin-SA), which applies horizontal and vertical rectangle-window attention in different heads in parallel to expand the attention area and aggregate features across different windows. We also introduce the Axial-Shift operation to enable interaction between different windows. Furthermore, we propose the Locality Complementary Module to complement the self-attention mechanism, which incorporates the inductive biases of CNNs (e.g., translation invariance and locality) into the Transformer, enabling global-local coupling. Extensive experiments demonstrate that our CAT outperforms recent state-of-the-art methods on several image restoration applications. The code and models are available at https://github.com/zhengchen1999/CAT.
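A minimal NumPy sketch of the rectangle-window idea described above: a feature map is partitioned into non-overlapping rh × rw windows, and horizontal (wide) windows can be used for some attention heads while vertical (tall) windows are used for others. The window sizes and shift amounts here are illustrative placeholders, not the paper's exact configuration, and this only shows the partitioning step, not the attention computation itself.

```python
import numpy as np

def rwin_partition(x, rh, rw):
    """Split a feature map x of shape (H, W, C) into non-overlapping
    rectangle windows of size rh x rw. Returns an array of shape
    (num_windows, rh * rw, C), so each window can be fed to
    self-attention independently."""
    H, W, C = x.shape
    assert H % rh == 0 and W % rw == 0, "window must tile the map"
    x = x.reshape(H // rh, rh, W // rw, rw, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, rh * rw, C)

# Toy feature map: 8x8 spatial resolution, 4 channels.
feat = np.arange(8 * 8 * 4, dtype=np.float32).reshape(8, 8, 4)

# Horizontal rectangle windows (2x8) for one group of heads,
# vertical rectangle windows (8x2) for another group.
h_windows = rwin_partition(feat, 2, 8)  # (4 windows, 16 tokens, 4 ch)
v_windows = rwin_partition(feat, 8, 2)  # (4 windows, 16 tokens, 4 ch)

# Axial-Shift (sketch): cyclically shift the map along H and W before
# the next block, so adjacent windows exchange information across
# window boundaries; the shift offsets here are arbitrary.
shifted = np.roll(feat, shift=(-1, -4), axis=(0, 1))

print(h_windows.shape, v_windows.shape, shifted.shape)
```

Because the horizontal and vertical windows cover orthogonal strips of the same map, heads that attend within different window orientations already aggregate features across what a square window would treat as separate regions.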

Zheng Chen, Yulun Zhang, Jinjin Gu, Yongbing Zhang, Linghe Kong, Xin Yuan • 2022

Related benchmarks

Task                           Dataset       Result       Rank
Super-Resolution               Set5          PSNR 38.51   751
Image Super-resolution         Manga109      PSNR 40.1    656
Super-Resolution               Urban100      PSNR 34.26   603
Super-Resolution               Set14         PSNR 34.78   586
Image Super-resolution         Set5 (test)   PSNR 38.51   544
Image Super-resolution         Set5          PSNR 38.51   507
Single Image Super-Resolution  Urban100      PSNR 34.26   500
Super-Resolution               B100          PSNR 32.59   418
Image Super-resolution         Set14         PSNR 34.78   329
Super-Resolution               BSD100        PSNR 32.59   313

Showing 10 of 45 rows.
