
FusionMamba: Dynamic Feature Enhancement for Multimodal Image Fusion with Mamba

About

Multimodal image fusion aims to integrate information from different imaging techniques into a single, comprehensive, detail-rich image for downstream vision tasks. Existing methods based on local convolutional neural networks (CNNs) struggle to capture global features efficiently, while Transformer-based models excel at global modeling but are computationally expensive. Mamba addresses these limitations by leveraging selective structured state space models (S4) to handle long-range dependencies effectively while maintaining linear complexity. In this paper, we propose FusionMamba, a novel dynamic feature enhancement framework that aims to overcome the challenges faced by CNNs and Vision Transformers (ViTs) in computer vision tasks. The framework improves the visual state space model Mamba by integrating dynamic convolution and channel attention mechanisms, which not only retains Mamba's powerful global feature modeling capability but also greatly reduces redundancy and enhances the expressiveness of local features. In addition, we develop a new dynamic feature fusion module (DFFM). It combines a dynamic feature enhancement module (DFEM) for texture enhancement and disparity perception with a cross-modal fusion Mamba module (CMFM), which strengthens inter-modal correlation while suppressing redundant information. Experiments show that FusionMamba achieves state-of-the-art performance across a variety of multimodal image fusion tasks and downstream experiments, demonstrating its broad applicability and superiority.
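The abstract names channel attention as one of the mechanisms folded into the framework. As a rough illustration of that idea (not the paper's actual DFEM/CMFM design; all function names, shapes, and the averaging fusion rule here are illustrative assumptions), a squeeze-and-excitation-style channel attention can reweight each modality's feature map before a simple fusion step:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative).

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are the
    two small projection matrices of the excitation MLP.
    """
    squeeze = feat.mean(axis=(1, 2))                      # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # per-channel gates in (0, 1)
    return feat * excite[:, None, None]                   # reweight channels

def fuse(feat_a, feat_b, w1, w2):
    """Toy fusion rule: attend to each modality, then average."""
    return 0.5 * (channel_attention(feat_a, w1, w2) +
                  channel_attention(feat_b, w1, w2))

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
ir  = rng.standard_normal((C, H, W))   # stand-in for an infrared feature map
vis = rng.standard_normal((C, H, W))   # stand-in for a visible feature map
fused = fuse(ir, vis, w1, w2)
print(fused.shape)  # prints (8, 4, 4)
```

The actual model learns these projections end to end and combines attention with dynamic convolution and Mamba state-space blocks; this sketch only shows the channel-gating step in isolation.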

Xinyu Xie, Yawen Cui, Tao Tan, Xubin Zheng, Zitong Yu · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Object Detection | FLIR (test) | mAP50 | 0.849 | 83 |
| Visible-Infrared Image Fusion | MSRS (test) | Average Gradient (AG) | 3.658 | 43 |
| Semantic Segmentation | MSRS | mIoU | 67.32 | 42 |
| Infrared-Visible Image Fusion | RoadScene (test) | Average Gradient (AG) | 5.711 | 40 |
| Object Detection | M³FD (test) | mAP@0.5 (Full) | 83.16 | 34 |
| Object Detection | MSRS (test) | mAP@0.5 | 96.6 | 34 |
| Multi-Modal Image Fusion | MRI-CT (test) | Entropy (EN) | 4.4 | 30 |
| Infrared-Visible Image Fusion | FMB (test) | Entropy (EN) | 6.84 | 16 |
| Semantic Segmentation | MSRS (test) | Background Score | 98.5 | 16 |
| Multi-Modal Image Fusion | MRI-SPECT (test) | Entropy (EN) | 4.79 | 16 |
Showing 10 of 14 rows
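The Entropy (EN) and Average Gradient (AG) scores above are standard reference-free fusion metrics: EN measures the information content of the fused image, and AG its local sharpness. A minimal numpy sketch of the common definitions (variants exist across benchmark implementations, e.g. in bin count or boundary handling):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (EN) of a grayscale image with values in [0, 255]."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()        # empirical gray-level distribution
    p = p[p > 0]                 # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """Average gradient (AG): mean magnitude of local intensity change."""
    img = img.astype(float)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

For example, a constant image scores 0 on both metrics, while a textured image with many distinct gray levels scores higher; larger EN and AG are read as better fusion quality on these benchmarks.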
