
MambaIRv2: Attentive State Space Restoration

About

Mamba-based image restoration backbones have recently demonstrated significant potential in balancing global reception and computational efficiency. However, the inherent causal modeling limitation of Mamba, where each token depends solely on its predecessors in the scanned sequence, restricts the full utilization of pixels across the image and thus presents new challenges for image restoration. In this work, we propose MambaIRv2, which equips Mamba with a non-causal modeling ability similar to ViTs, yielding an attentive state space restoration model. Specifically, the proposed attentive state-space equation allows attending beyond the scanned sequence and facilitates image unfolding with just a single scan. Moreover, we introduce a semantic-guided neighboring mechanism to encourage interaction between distant but similar pixels. Extensive experiments show that MambaIRv2 outperforms SRFormer by up to 0.35 dB PSNR on lightweight SR with 9.3% fewer parameters, and surpasses HAT on classic SR by up to 0.29 dB. Code is available at https://github.com/csguoh/MambaIR.
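To make the causality limitation concrete, the contrast between a standard (causal) state-space scan and a non-causal, attention-style readout can be sketched as below. This is a conceptual toy in NumPy, not the paper's actual attentive state-space formulation; the function names and parameterization are hypothetical illustrations of the two modeling regimes.

```python
import numpy as np

def causal_ssm_scan(x, A, B, C):
    """Vanilla (causal) SSM scan: h_t = A h_{t-1} + B x_t, y_t = C h_t.
    The output at position t can only see tokens 0..t."""
    L, d = x.shape
    n = A.shape[0]
    h = np.zeros(n)
    y = np.zeros((L, d))
    for t in range(L):
        h = A @ h + B @ x[t]   # state summarizes only the scanned prefix
        y[t] = C @ h           # y[t] is blind to tokens after t
    return y

def attentive_readout(x, Wq, Wk, Wv):
    """Non-causal, ViT-like readout: every position attends to the
    full sequence, so y[t] can mix information from all pixels."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ v            # each output mixes past AND future tokens
```

Perturbing a late token leaves earlier outputs of the causal scan untouched, while every output of the attentive readout shifts, which is exactly the gap that the attentive state-space equation in the paper is designed to close.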

Hang Guo, Yong Guo, Yaohua Zha, Yulun Zhang, Wenbo Li, Tao Dai, Shu-Tao Xia, Yawei Li• 2024

Related benchmarks

| Task                          | Dataset      | Result      | Rank |
|-------------------------------|--------------|-------------|------|
| Super-Resolution              | Set5         | PSNR 38.26  | 751  |
| Image Super-resolution        | Manga109     | PSNR 40.55  | 656  |
| Super-Resolution              | Set14        | PSNR 34.09  | 586  |
| Image Super-resolution        | Set5 (test)  | PSNR 38.65  | 544  |
| Single Image Super-Resolution | Urban100     | PSNR 34.6   | 500  |
| Super-Resolution              | B100 (test)  | PSNR 32.62  | 363  |
| Single Image Super-Resolution | Set5         | PSNR 38.26  | 352  |
| Image Super-resolution        | Set14        | PSNR 34.93  | 329  |
| Super-Resolution              | BSD100       | PSNR 32.36  | 313  |
| Super-Resolution              | Manga109     | PSNR 39.35  | 298  |

Showing 10 of 53 rows

Other info

Code
