
Cross-Modality Attentive Feature Fusion for Object Detection in Multispectral Remote Sensing Imagery

About

Cross-modality fusion of the complementary information in multispectral remote sensing image pairs can improve the perception ability of detection algorithms, making them more robust and reliable for a wider range of applications, such as nighttime detection. In contrast to prior methods, we argue that different features should be processed differently: modality-specific features should be retained and enhanced, while modality-shared features should be cherry-picked from the RGB and thermal IR modalities. Following this idea, we propose a novel and lightweight multispectral feature fusion approach with joint common-modality and differential-modality attentions, named Cross-Modality Attentive Feature Fusion (CMAFF). Given the intermediate feature maps of RGB and IR images, our module infers attention maps from two separate modalities, common- and differential-modality, in parallel; the attention maps are then multiplied with the respective input feature maps for adaptive feature enhancement or selection. Extensive experiments demonstrate that our proposed approach achieves state-of-the-art performance at a low computational cost.
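The two-branch idea in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the exact attention design of CMAFF (its pooling, learned weights, and fusion rule) is not given here, so the sigmoid-of-pooled-features attention below is an assumed stand-in. It only shows the overall flow: form common- and differential-modality features from the RGB and IR feature maps, infer an attention map per branch, and multiply each attention map with its branch for enhancement or selection.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cmaff_sketch(rgb_feat, ir_feat):
    """Illustrative sketch of cross-modality attentive fusion.

    rgb_feat, ir_feat: intermediate feature maps of shape (C, H, W).
    Returns a fused feature map of the same shape.
    """
    # Common-modality features: information shared by both inputs.
    common = 0.5 * (rgb_feat + ir_feat)
    # Differential-modality features: modality-specific information.
    diff = rgb_feat - ir_feat

    # Per-channel attention via global average pooling + sigmoid
    # (an assumption for this sketch, not the paper's exact design).
    common_att = sigmoid(common.mean(axis=(1, 2), keepdims=True))
    diff_att = sigmoid(diff.mean(axis=(1, 2), keepdims=True))

    # Enhance the shared features and select the specific ones,
    # then combine the two branches.
    return common * common_att + diff * diff_att
```

In a real detector both attention branches would be learned modules inside the network; the sketch uses fixed pooling only to make the data flow concrete.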

Qingyun Fang, Zhaokui Wang • 2021

Related benchmarks

Task | Dataset | Result | Rank
Object Detection | LLVIP | mAP50: 95.4 | 104
Object Detection | FLIR (test) | mAP50: 0.766 | 94
Object Detection | FLIR | -- | 59
Object Detection | M4-SAR | AP50 (Brightness): 75.8 | 39
Object Detection | VEDAI (test) | mAP@0.50: 74.8 | 19
Oriented Object Detection | OGSOD 2.0 | AP@50: 90.8 | 9
Oriented Object Detection | OGSOD 1.0 | AP50: 92.9 | 9
Oriented Object Detection | VEDAI (test) | mAP50: 75.9 | 8
Multi-category object detection | FLIR RGB + IR (test) | AP50: 77.7 | 4
