
A Unified Foundation Model for All-in-One Multi-Modal Remote Sensing Image Restoration and Fusion with Language Prompting

About

Remote sensing imagery suffers from clouds, haze, noise, resolution limits, and sensor heterogeneity, yet existing restoration and fusion approaches train a separate model per degradation type. In this work, we present LLaRS (Language-conditioned Large-scale Remote Sensing restoration model), the first unified foundation model for multi-modal, multi-task remote sensing low-level vision. LLaRS employs Sinkhorn-Knopp optimal transport to align heterogeneous bands into semantically matched slots, routes features through three complementary mixture-of-experts layers (convolutional experts for spatial patterns, channel-mixing experts for spectral fidelity, and attention experts with low-rank adapters for global context), and stabilizes joint training via step-level dynamic weight adjustment. To train LLaRS, we construct LLaRS-1M, a million-scale multi-task dataset spanning eleven restoration and enhancement tasks, integrating real paired observations and controlled synthetic degradations with diverse natural language prompts. Experiments show that LLaRS consistently outperforms seven competitive models, and parameter-efficient finetuning experiments demonstrate strong transfer capability and adaptation efficiency on unseen data. Repo: https://github.com/yc-cui/LLaRS
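The abstract does not spell out the band-alignment step, but the Sinkhorn-Knopp procedure it names is standard: given a cost matrix between source bands and target slots, alternately normalize rows and columns of the Gibbs kernel until the result is (approximately) doubly stochastic, yielding a soft band-to-slot assignment. A minimal NumPy sketch, assuming a generic entropy-regularized formulation (the function name, `eps`, and iteration count are illustrative, not taken from the paper):

```python
import numpy as np

def sinkhorn(cost, n_iters=200, eps=0.1):
    """Soft assignment via Sinkhorn-Knopp normalization.

    cost: (n_bands, n_slots) pairwise distances, e.g. between band
    embeddings and learned slot prototypes. Returns a transport plan
    whose rows and columns each sum to ~1, i.e. a soft matching of
    heterogeneous input bands to semantically aligned slots.
    """
    K = np.exp(-cost / eps)  # Gibbs kernel: low cost -> high affinity
    for _ in range(n_iters):
        K /= K.sum(axis=1, keepdims=True)  # row normalization
        K /= K.sum(axis=0, keepdims=True)  # column normalization
    return K

# Hypothetical usage: match 4 sensor bands to 4 slots.
rng = np.random.default_rng(0)
plan = sinkhorn(rng.random((4, 4)))
```

In practice a model would apply `plan` (or a differentiable variant) to re-mix input bands before the shared backbone; how LLaRS integrates the plan into its slots is not detailed in this abstract.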

Yongchuan Cui, Peng Liu • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Perceptual Image Restoration | Average across datasets (combined) | PSNR 37.6 | 35 |
| Brightness Enhancement | LLaRS-1M | PSNR 64.27 | 8 |
| Cloud Removal | LLaRS-1M | PSNR 31.18 | 8 |
| Deblurring | LLaRS-1M | PSNR 45.26 | 8 |
| Dehazing | LLaRS-1M | PSNR 22.81 | 8 |
| Denoising | LLaRS-1M | PSNR 43.84 | 8 |
| Destriping | LLaRS-1M | PSNR 49.62 | 8 |
| Haze Removal | LLaRS Haze Removal | SAM 0.0442 | 8 |
| Histogram Equalization Reversal | LLaRS-1M | PSNR 19.69 | 8 |
| Linear Stretch Reversal | LLaRS-1M | PSNR 25.29 | 8 |

(10 of 24 rows shown)
