
Perceptual Region-Driven Infrared-Visible Co-Fusion for Extreme Scene Enhancement

About

In photogrammetry, accurately fusing infrared (IR) and visible (VIS) spectra while preserving the geometric fidelity of visible features and incorporating thermal radiation is a significant challenge, particularly under extreme conditions. Existing methods often compromise visible-image quality, degrading measurement accuracy. To address this, we propose a region-perception-based fusion framework that combines multi-exposure and multi-modal imaging using a spatially varying exposure (SVE) camera. By co-fusing multi-modal and multi-exposure data, the framework overcomes the limitations of single-exposure methods in extreme environments. It begins with region-perception-based feature fusion to ensure precise multi-modal registration, followed by adaptive fusion with contrast enhancement. A structural similarity compensation mechanism, guided by regional saliency maps, optimizes IR-VIS spectral integration. The framework also adapts to single-exposure scenarios, enabling robust fusion across different conditions. Experiments on both synthetic and real-world data demonstrate superior image clarity and improved performance over state-of-the-art methods, in both quantitative and visual evaluations.
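The core idea of saliency-guided IR-VIS integration can be illustrated with a minimal sketch. This is NOT the paper's method: the saliency map here is a hypothetical stand-in (normalized deviation from the mean IR intensity), and the fusion is a simple per-pixel weighted blend, shown only to convey how a regional saliency map can steer how much thermal (IR) versus geometric (VIS) content each region receives.

```python
import numpy as np

def saliency_map(ir, eps=1e-8):
    """Crude regional saliency: normalized absolute deviation from the
    mean IR intensity, so thermally salient (hot) regions get weights
    near 1. (Hypothetical stand-in for the paper's saliency maps.)"""
    s = np.abs(ir - ir.mean())
    return (s - s.min()) / (s.max() - s.min() + eps)

def fuse_ir_vis(ir, vis):
    """Per-pixel weighted blend: salient regions draw from the IR image,
    the rest keeps the visible image's geometric detail."""
    w = saliency_map(ir)
    return w * ir + (1.0 - w) * vis

# Toy single-channel images in [0, 1]
rng = np.random.default_rng(0)
vis = rng.uniform(0.2, 0.8, size=(64, 64))   # textured visible image
ir = np.zeros((64, 64))
ir[20:40, 20:40] = 1.0                       # one "hot" target region
fused = fuse_ir_vis(ir, vis)
```

In this toy case the hot region is fully taken from the IR image while the background stays identical to the visible image; a real pipeline would additionally handle registration, multi-exposure data, and contrast enhancement as described above.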

Jing Tao, Yonghong Zong, Banglei Guan, Pengju Sun, Taihang Lei, Yang Shang, Qifeng Yu • 2025

Related benchmarks

Task | Dataset | Result | Rank
Infrared and Visible Image Fusion | Rocket 2 | AG (Average Gradient): 5.581 | 10
Infrared and Visible Image Fusion | Public | AG: 5.465 | 10
Infrared and Visible Image Fusion | Rocket 1 | AG: 2.584 | 10
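The AG (Average Gradient) figures above score the sharpness of the fused images: the mean magnitude of local intensity gradients, where higher values indicate more preserved detail. A minimal sketch of one common AG formulation (the benchmark may use a slightly different variant):

```python
import numpy as np

def average_gradient(img):
    """Average Gradient: mean of sqrt((dx^2 + dy^2) / 2) over forward
    horizontal (dx) and vertical (dy) intensity differences."""
    img = img.astype(np.float64)
    gx = img[:-1, 1:] - img[:-1, :-1]   # horizontal differences
    gy = img[1:, :-1] - img[:-1, :-1]   # vertical differences
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

flat = np.full((8, 8), 0.5)              # constant image: AG = 0
ramp = np.tile(np.arange(8.0), (8, 1))   # left-to-right ramp: AG = 1/sqrt(2)
```

A constant image has zero AG, while any texture or edge content raises it, which is why sharper fused outputs rank higher on this metric.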
