
Infrared and Visible Image Fusion with Language-Driven Loss in CLIP Embedding Space

About

Infrared-visible image fusion (IVIF) has attracted much attention owing to the highly complementary properties of the two image modalities. Due to the lack of ground-truth fused images, the fusion output of current deep-learning-based methods depends heavily on mathematically defined loss functions. Since the ideal fused image is hard to define mathematically without ground truth, the performance of existing fusion methods is limited. In this paper, we propose to use natural language to express the objective of IVIF, which avoids the explicit mathematical modeling of the fusion output required by current losses and makes full use of the expressive power of language to improve fusion performance. To this end, we present a comprehensive language-expressed fusion objective and encode the relevant texts into the multi-modal embedding space using CLIP. A language-driven fusion model is then constructed in the embedding space by establishing relationships among the embedded vectors representing the fusion objective and the input image modalities. Finally, a language-driven loss is derived to align the actual IVIF with the embedded language-driven fusion model via supervised training. Experiments show that our method obtains much better fusion results than existing techniques. The code is available at https://github.com/wyhlaowang/LDFusion.
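The core idea above is that the loss is computed in a shared embedding space: the fused image and the text describing the fusion objective are both encoded (in the paper, by CLIP), and the loss penalizes misalignment between their embeddings. The following is a minimal sketch of such a cosine-distance loss, using random vectors as stand-ins for CLIP embeddings; the function name `language_driven_loss` and the toy embeddings are hypothetical and not taken from the authors' code.

```python
import numpy as np

def l2_normalize(v, eps=1e-8):
    """Scale a vector to unit length, as is standard for CLIP embeddings."""
    return v / (np.linalg.norm(v) + eps)

def language_driven_loss(e_fused, e_text):
    """Cosine-distance loss in the shared embedding space.

    Small when the fused image's embedding aligns with the text
    embedding expressing the fusion objective; near 1 when unrelated.
    """
    return 1.0 - float(np.dot(l2_normalize(e_fused), l2_normalize(e_text)))

# Toy 512-d vectors standing in for CLIP text/image embeddings.
rng = np.random.default_rng(0)
e_text = rng.standard_normal(512)
e_fused_good = e_text + 0.01 * rng.standard_normal(512)  # nearly aligned
e_fused_bad = rng.standard_normal(512)                   # unrelated

print(language_driven_loss(e_fused_good, e_text))  # near zero when aligned
print(language_driven_loss(e_fused_bad, e_text))
```

In practice the embeddings would come from CLIP's image and text encoders, and the paper's full model relates the fused, infrared, visible, and text embeddings rather than a single image-text pair; this sketch only illustrates the alignment term.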

Yuhao Wang, Lingjuan Miao, Zhiqiang Zhou, Lei Zhang, Yajun Qiao • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Infrared and Visible Image Fusion | Rocket 1 | AG (Average Gradient): 2.777 | 10 |
| Infrared and Visible Image Fusion | Rocket 2 | AG (Average Gradient): 3.747 | 10 |
| Infrared and Visible Image Fusion | Public | AG (Average Gradient): 5.245 | 10 |
