
FTPFusion: Frequency-Aware Infrared and Visible Video Fusion with Temporal Perturbation

About

Infrared and visible video fusion plays a critical role in intelligent surveillance and low-light monitoring. However, maintaining temporal stability while preserving spatial detail remains a fundamental challenge. Existing methods either focus on frame-wise enhancement with limited temporal modeling or rely on heavy spatio-temporal aggregation that often sacrifices high-frequency details. In this paper, we propose FTPFusion, a frequency-aware infrared and visible video fusion method based on temporal perturbation and sparse cross-modal interaction. Specifically, FTPFusion decomposes the feature representations into high-frequency and low-frequency components for collaborative modeling. The high-frequency branch performs sparse cross-modal spatio-temporal interaction to capture motion-related context and complementary details. The low-frequency branch introduces a temporal perturbation strategy to enhance robustness against complex video variations, such as flickering, jitter, and local misalignment. Furthermore, we design an offset-aware temporal consistency constraint to explicitly stabilize cross-frame representations under temporal disturbances. Extensive experiments on multiple public benchmarks demonstrate that FTPFusion consistently outperforms state-of-the-art methods across multiple metrics in both spatial fidelity and temporal consistency. The source code will be available at https://github.com/ixilai/FTPFusion.
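The two core ideas in the abstract, splitting features into low- and high-frequency components and perturbing the temporal stream, can be illustrated with a minimal sketch. This is a hedged toy example, not the paper's implementation: it assumes a simple box-filter low-pass (downsample + upsample) for the frequency split and a random neighbor-frame blend as the temporal perturbation; the actual FTPFusion modules are learned.

```python
import numpy as np

def freq_decompose(feat, k=4):
    """Split a 2-D feature map into low/high frequency parts.

    A crude low-pass: average-pool by factor k, then nearest-neighbor
    upsample back. The high-frequency part is the residual, so
    low + high reconstructs the input exactly. (Assumes H, W divisible by k.)
    """
    H, W = feat.shape
    low = feat.reshape(H // k, k, W // k, k).mean(axis=(1, 3))
    low = np.repeat(np.repeat(low, k, axis=0), k, axis=1)
    high = feat - low
    return low, high

def temporal_perturb(frames, alpha=0.2, rng=None):
    """Simulate flicker/jitter by blending each frame with a random neighbor.

    frames: list of (H, W) arrays; alpha controls perturbation strength.
    This stands in for the paper's temporal perturbation strategy, which
    trains the low-frequency branch to be robust to such disturbances.
    """
    rng = rng or np.random.default_rng(0)
    T = len(frames)
    out = []
    for t in range(T):
        j = int(np.clip(t + rng.integers(-1, 2), 0, T - 1))
        out.append((1 - alpha) * frames[t] + alpha * frames[j])
    return out
```

In a fusion pipeline, the high-frequency residuals from the infrared and visible streams would feed the sparse cross-modal interaction, while the perturbed low-frequency stream trains the model toward temporal stability.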

Xilai Li, Chusheng Fang, Xiaosong Li • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Infrared and Visible Video Fusion | M3SVD | QMI | 58.16 | 8 |
| Infrared and Visible Video Fusion | HDO | QMI | 0.4859 | 8 |
| Infrared and Visible Video Fusion | VTMOT | QMI | 0.5316 | 8 |
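The benchmark metric QMI is a mutual-information-based fusion quality score: it measures how much information the fused video retains from each source modality. A minimal histogram-based sketch is below; this is an illustrative approximation (one common formulation sums the MI between the fused frame and each source), not necessarily the exact normalization used by these leaderboards.

```python
import numpy as np

def mutual_info(a, b, bins=32):
    """Histogram-based mutual information (in bits) between two images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                     # joint distribution
    px = p.sum(axis=1, keepdims=True)         # marginal of a
    py = p.sum(axis=0, keepdims=True)         # marginal of b
    nz = p > 0                                # avoid log(0)
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

def qmi(fused, ir, vis, bins=32):
    """Illustrative QMI: information the fused image shares with each source."""
    return mutual_info(fused, ir, bins) + mutual_info(fused, vis, bins)
```

Per-frame scores would be averaged over the video to produce a table entry like those above.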
