
Multi-interactive Feature Learning and a Full-time Multi-modality Benchmark for Image Fusion and Segmentation

About

Multi-modality image fusion and segmentation play a vital role in autonomous driving and robotic operation. Early efforts focused on boosting the performance of only one task, e.g., fusion or segmentation, making it hard to reach the "best of both worlds". To overcome this issue, we propose a Multi-interactive Feature learning architecture for image fusion and Segmentation, namely SegMiF, which exploits the dual-task correlation to promote the performance of both tasks. SegMiF has a cascade structure containing a fusion sub-network and a commonly used segmentation sub-network. By bridging intermediate features between the two components, the knowledge learned from the segmentation task effectively assists the fusion task; in turn, the improved fusion network helps the segmentation network perform more precisely. Besides, a hierarchical interactive attention block is established to ensure fine-grained mapping of all the vital information between the two tasks, so that the modality and semantic features can be fully mutually interactive. In addition, a dynamic weight factor is introduced to automatically adjust the corresponding weight of each task, which balances the interactive feature correspondence and removes the need for laborious manual tuning. Furthermore, we construct a smart multi-wave binocular imaging system and collect a full-time multi-modality benchmark with 15 annotated pixel-level categories for image fusion and segmentation. Extensive experiments on several public datasets and our benchmark demonstrate that the proposed method produces visually appealing fused images and achieves, on average, 7.66% higher segmentation mIoU in real-world scenes than state-of-the-art approaches. The source code and benchmark are available at https://github.com/JinyuanLiu-CV/SegMiF.
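The abstract mentions a dynamic weight factor that balances the fusion and segmentation losses without manual tuning, but does not spell out the formula. As a rough illustration only (not the paper's actual scheme), one simple strategy is to weight each task's loss in proportion to its current magnitude, so the task that is currently harder receives more emphasis; the helper name below is hypothetical:

```python
def dynamic_weights(current_losses, eps=1e-8):
    """Return per-task weights that sum to ~1, proportional to each
    task's current loss, so the lagging task is emphasized.
    Illustrative sketch, not the SegMiF formulation."""
    total = sum(current_losses) + eps
    return [loss / total for loss in current_losses]

# Example: the segmentation loss (0.6) is currently larger than the
# fusion loss (0.2), so segmentation gets the larger weight (0.75).
fusion_loss, seg_loss = 0.2, 0.6
w_fusion, w_seg = dynamic_weights([fusion_loss, seg_loss])
combined_loss = w_fusion * fusion_loss + w_seg * seg_loss
```

In a real training loop such weights would be recomputed (or learned) every iteration; the key point the abstract makes is that the balance adapts automatically rather than being hand-tuned.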

Jinyuan Liu, Zhu Liu, Guanyao Wu, Long Ma, Risheng Liu, Wei Zhong, Zhongxuan Luo, Xin Fan• 2023

Related benchmarks

Task                               Dataset            Result        Rank
Semantic Segmentation              MFNet (test)       mIoU 62.28    168
Object Detection                   COCO               --            137
Object Detection                   LLVIP              mAP50 93.95   104
Semantic Segmentation              FMB (test)         mIoU 57.58    100
Semantic Segmentation              MSRS               mIoU 74.25    68
Infrared-Visible Image Fusion      RoadScene (test)   --            53
Salient Object Detection           VT5000             --            50
Semantic Segmentation              FMB                mIoU 0.5915   49
Infrared and Visible Image Fusion  RoadScene          Qabf 0.53     42
Infrared-Visible Image Fusion      MSRS               Qabf 0.63     38

(Showing 10 of 41 rows.)
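Several rows above report mIoU (mean Intersection over Union), the standard semantic-segmentation metric: per-class IoU averaged over the classes present. A minimal reference computation on flat label arrays (a generic sketch, not the benchmark's official evaluation code):

```python
def mean_iou(pred, gt, num_classes):
    """Mean IoU over classes that appear in either pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:  # skip classes absent from both prediction and truth
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Two classes over six pixels:
pred = [0, 0, 1, 1, 1, 0]
gt   = [0, 0, 1, 1, 0, 0]
# class 0: inter 3 / union 4 = 0.75; class 1: inter 2 / union 3 ~ 0.667
score = mean_iou(pred, gt, 2)
```

Real evaluations accumulate these counts in a per-class confusion matrix over the whole test set; the metric itself is the same ratio.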
