
From Words to Wavelengths: VLMs for Few-Shot Multispectral Object Detection

About

Multispectral object detection is critical for safety-sensitive applications such as autonomous driving and surveillance, where robust perception under diverse illumination conditions is essential. However, the limited availability of annotated multispectral data severely restricts the training of deep detectors. In such data-scarce scenarios, textual class information can serve as a valuable source of semantic supervision. Motivated by the recent success of Vision-Language Models (VLMs) in computer vision, we explore their potential for few-shot multispectral object detection. Specifically, we adapt two representative VLM-based detectors, Grounding DINO and YOLO-World, to handle multispectral inputs and propose an effective mechanism to integrate text, visual, and thermal modalities. Through extensive experiments on two popular multispectral image benchmarks, FLIR and M3FD, we demonstrate that VLM-based detectors not only excel in few-shot regimes, significantly outperforming specialized multispectral models trained with comparable data, but also achieve competitive or superior results under fully supervised settings. Our findings reveal that the semantic priors learned by large-scale VLMs effectively transfer to unseen spectral modalities, offering a powerful pathway toward data-efficient multispectral perception.
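The abstract does not detail the fusion mechanism used to combine modalities. As a rough illustration only, the PyTorch sketch below shows one common way to merge aligned visible and thermal streams (per-modality stems followed by concatenation and a 1x1 convolution) before features reach a text-conditioned detector such as Grounding DINO or YOLO-World. All module and parameter names here are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: fusing visible (RGB) and thermal features before a
# text-conditioned detection head. The paper's actual fusion mechanism is not
# described on this page; this shows one common design (concat + 1x1 conv).
import torch
import torch.nn as nn


class SimpleSpectralFusion(nn.Module):
    """Fuse per-modality feature maps into a single map for the detector neck."""

    def __init__(self, channels: int = 256):
        super().__init__()
        # Separate lightweight stems for the visible and thermal streams.
        self.rgb_stem = nn.Conv2d(3, channels, kernel_size=3, stride=2, padding=1)
        self.ir_stem = nn.Conv2d(1, channels, kernel_size=3, stride=2, padding=1)
        # A 1x1 convolution merges the concatenated streams back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_stem(rgb)        # (B, C, H/2, W/2)
        f_ir = self.ir_stem(thermal)      # (B, C, H/2, W/2)
        fused = torch.cat([f_rgb, f_ir], dim=1)
        return self.fuse(fused)           # passed on to the detector's neck/head


if __name__ == "__main__":
    rgb = torch.randn(2, 3, 640, 640)      # visible image batch
    thermal = torch.randn(2, 1, 640, 640)  # aligned thermal image batch
    fused = SimpleSpectralFusion()(rgb, thermal)
    print(fused.shape)                     # torch.Size([2, 256, 320, 320])
```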

Manuel Nkegoum, Minh-Tan Pham, Élisa Fromont, Bruno Avignon, Sébastien Lefèvre • 2025

Related benchmarks

Task              Dataset               Metric          Result   Rank
Object Detection  FLIR (test)           mAP50           0.878    83
Object Detection  FLIR                  mAP             72.05    40
Object Detection  M3FD                  mAP50 (Person)  55.2     16
Object Detection  M3FD                  AP (Person)     55.2     16
Object Detection  M3FD                  mAP             55.3     9
Object Detection  FLIR 5-shot (test)    mAP50           70.69    8
Object Detection  FLIR 10-shot (test)   mAP50           71.15    8
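The FLIR 5-shot and 10-shot rows above report results on few-shot training subsets. The paper's exact sampling protocol is not reproduced on this page; the sketch below illustrates one standard way such k-shot subsets are commonly drawn, keeping images until every class has at least k annotated instances. The function and variable names are illustrative, not the authors'.

```python
# Hypothetical sketch of k-shot subset construction for few-shot detection.
# For each class, images are kept until at least k annotated instances of
# that class have been collected.
import random
from collections import defaultdict


def sample_k_shot(annotations, k, seed=0):
    """annotations: dict image_id -> list of class names present in that image."""
    rng = random.Random(seed)
    image_ids = list(annotations)
    rng.shuffle(image_ids)

    counts = defaultdict(int)  # instances collected so far, per class
    selected = []
    for img_id in image_ids:
        labels = annotations[img_id]
        # Keep the image only if it still helps an under-filled class.
        if any(counts[c] < k for c in labels):
            selected.append(img_id)
            for c in labels:
                counts[c] += 1
    return selected


if __name__ == "__main__":
    toy = {
        "img1": ["person", "car"],
        "img2": ["person"],
        "img3": ["bicycle"],
        "img4": ["car", "bicycle"],
    }
    print(sample_k_shot(toy, k=1))
```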
