Cross-Modality Attentive Feature Fusion for Object Detection in Multispectral Remote Sensing Imagery
About
Cross-modality fusion of the complementary information in multispectral remote sensing image pairs can improve the perception ability of detection algorithms, making them more robust and reliable across a wider range of applications, such as nighttime detection. In contrast to prior methods, we argue that different features should be processed specifically: modality-specific features should be retained and enhanced, while modality-shared features should be cherry-picked from the RGB and thermal IR modalities. Following this idea, we propose a novel and lightweight multispectral feature fusion approach with joint common-modality and differential-modality attentions, named Cross-Modality Attentive Feature Fusion (CMAFF). Given the intermediate feature maps of RGB and IR images, our module infers attention maps from two separate modalities, common- and differential-modality, in parallel; the attention maps are then multiplied with the input feature maps respectively for adaptive feature enhancement or selection. Extensive experiments demonstrate that our proposed approach achieves state-of-the-art performance at a low computation cost.
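The two-branch idea above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the common branch is assumed to be the average of the RGB and IR feature maps, the differential branch their difference, and the attention is a simple squeeze-and-excitation-style channel attention; the function and weight names (`cmaff_sketch`, `w1`, `w2`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Channel attention via global average pooling + bottleneck MLP (illustrative)."""
    z = feat.mean(axis=(1, 2))                 # squeeze: (C, H, W) -> (C,)
    a = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # excite: ReLU bottleneck, then sigmoid
    return a[:, None, None]                    # broadcastable over H and W

def cmaff_sketch(f_rgb, f_ir, w1, w2):
    f_common = 0.5 * (f_rgb + f_ir)  # modality-shared features (assumed form)
    f_diff = f_rgb - f_ir            # modality-specific (differential) features
    # Attention maps are inferred from each branch and multiplied back onto it:
    # adaptive enhancement for the common branch, selection for the differential one.
    f_enhanced = f_common * channel_attention(f_common, w1, w2)
    f_selected = f_diff * channel_attention(f_diff, w1, w2)
    return f_enhanced + f_selected   # fused multispectral features

# Toy intermediate feature maps of shape (channels, height, width).
C, H, W = 8, 4, 4
f_rgb = rng.standard_normal((C, H, W))
f_ir = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C))  # bottleneck weights (random for the demo)
w2 = rng.standard_normal((C, C // 2))

fused = cmaff_sketch(f_rgb, f_ir, w1, w2)
print(fused.shape)  # (8, 4, 4) -- fusion preserves the feature-map shape
```

The fused output keeps the input shape, so the module can drop into a detection backbone between convolutional stages without changing downstream layer sizes.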
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Object Detection | LLVIP | mAP50: 95.4 | 104 |
| Object Detection | FLIR (test) | mAP50: 0.766 | 94 |
| Object Detection | FLIR | -- | 59 |
| Object Detection | M4-SAR | AP50 (Brightness): 75.8 | 39 |
| Object Detection | VEDAI (test) | mAP@0.50: 74.8 | 19 |
| Oriented Object Detection | OGSOD 2.0 | AP@50: 90.8 | 9 |
| Oriented Object Detection | OGSOD 1.0 | AP50: 92.9 | 9 |
| Oriented Object Detection | VEDAI (test) | mAP50: 75.9 | 8 |
| Multi-category object detection | FLIR RGB + IR (test) | AP50: 77.7 | 4 |