Empowering Semantic-Sensitive Underwater Image Enhancement with VLM

About

In recent years, learning-based underwater image enhancement (UIE) techniques have advanced rapidly. However, distribution shifts between high-quality enhanced outputs and natural images can hinder the extraction of semantic cues for downstream vision tasks, limiting the adaptability of existing enhancement models. To address this challenge, this work proposes a new learning mechanism that leverages Vision-Language Models (VLMs) to endow UIE models with semantic-sensitive capabilities. Concretely, our strategy first uses a VLM to generate textual descriptions of the key objects in a degraded image. A text-image alignment model then remaps these descriptions onto the image to produce a spatial semantic guidance map. This map steers the UIE network through a dual-guidance mechanism that combines cross-attention with an explicit alignment loss, forcing the network to concentrate its restorative capacity on semantic-sensitive regions during reconstruction rather than pursuing a globally uniform improvement, and thereby ensuring faithful restoration of key object features. Experiments confirm that applying our strategy to different UIE baselines significantly boosts their perceptual quality metrics and improves their performance on detection and segmentation tasks, validating its effectiveness and adaptability.
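
The dual-guidance idea can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration, assuming a CLIP-style encoder supplies patch features and a text embedding for the VLM-generated description; all names (guidance_map, CrossAttentionGuidance, alignment_loss) are hypothetical and do not reflect the authors' implementation.

```python
# Minimal sketch of the dual-guidance mechanism (hypothetical names).
import torch
import torch.nn as nn
import torch.nn.functional as F

def guidance_map(patch_feats, text_embed):
    """Remap a text embedding onto image patches via cosine similarity.

    patch_feats: (B, N, D) patch features from a text-image alignment
                 model (e.g. a CLIP-style image encoder).
    text_embed:  (B, D) embedding of the VLM-generated object description.
    Returns a (B, N) spatial semantic guidance map.
    """
    sim = F.cosine_similarity(patch_feats, text_embed.unsqueeze(1), dim=-1)
    return sim.softmax(dim=-1)  # normalize over spatial positions

class CrossAttentionGuidance(nn.Module):
    """Cross-attention branch: UIE features attend to the guidance map."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(1, dim)  # lift scalar guidance to feature dim

    def forward(self, uie_feats, g_map):
        # uie_feats: (B, N, dim); g_map: (B, N)
        g = self.proj(g_map.unsqueeze(-1))   # (B, N, dim)
        out, _ = self.attn(uie_feats, g, g)  # queries = UIE features
        return uie_feats + out               # residual injection

def alignment_loss(enhanced_feats, text_embed, g_map):
    """Explicit alignment branch: pull guidance-weighted enhanced features
    toward the text embedding so key objects are faithfully restored."""
    pooled = (enhanced_feats * g_map.unsqueeze(-1)).sum(dim=1)  # (B, D)
    return 1.0 - F.cosine_similarity(pooled, text_embed, dim=-1).mean()
```

In this reading, the guidance map steers enhancement in two complementary ways: cross-attention injects the semantic prior into intermediate features, while the alignment loss explicitly penalizes enhanced outputs whose key-object regions drift from the textual description.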

Guodong Fan, Shengning Zhou, Genji Yuan, Huiyu Li, Jingchun Zhou, Jinjiang Li • 2026

Related benchmarks

Task                           Dataset       Metric      Result   Rank
Underwater Image Enhancement   U45           UCIQE       0.449    33
Underwater Image Enhancement   Challenge     UCIQE       0.437    23
Object Detection               UIEB (test)   Plastic-AP  98.59    11
Semantic Segmentation          UIEB (test)   RO          51.52    11
Underwater Image Enhancement   UIEB          PSNR        24.97    10
