
Predicting Landsat Reflectance with Deep Generative Fusion

About

Public satellite missions are commonly bound to a trade-off between spatial and temporal resolution, as no single sensor provides fine-grained acquisitions with frequent coverage. This limits their usefulness for vegetation monitoring and humanitarian response, which require detecting rapid, detailed changes of the terrestrial surface. In this work, we probe the potential of deep generative models to produce high-resolution optical imagery by fusing products with different spatial and temporal characteristics. We introduce a dataset of co-registered Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat surface reflectance time series and demonstrate the ability of our generative model to blend coarse daily reflectance information into less frequent, finer-resolution acquisitions. We benchmark our proposed model against state-of-the-art reflectance fusion algorithms.
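To make the fusion setting concrete, here is a minimal sketch of a classical additive-correction baseline (in the spirit of STARFM-style methods, not the paper's generative model): a fine Landsat-like frame at time t0 is updated with the upsampled temporal change observed in the coarse MODIS-like series between t0 and t1. All array shapes, the scale factor, and the function name are illustrative assumptions.

```python
import numpy as np

def fuse_reflectance(fine_prev, coarse_prev, coarse_now, scale=16):
    """Predict a fine-resolution frame at t1 from a fine frame at t0
    plus the coarse-resolution temporal change between t0 and t1.
    Illustrative baseline, not the paper's method."""
    # Temporal change observed at coarse resolution
    delta = coarse_now - coarse_prev
    # Upsample the change to the fine grid (nearest-neighbour replication)
    delta_up = np.kron(delta, np.ones((scale, scale)))
    # Additive correction: keep fine spatial detail, inject coarse dynamics
    return fine_prev + delta_up

# Toy example: a 2x2 coarse grid maps onto a 32x32 fine grid
rng = np.random.default_rng(0)
fine_prev = rng.uniform(0.0, 0.4, (32, 32))   # Landsat-like frame at t0
coarse_prev = rng.uniform(0.0, 0.4, (2, 2))   # MODIS-like frame at t0
coarse_now = coarse_prev + 0.05               # uniform brightening by t1
pred = fuse_reflectance(fine_prev, coarse_prev, coarse_now)
print(np.allclose(pred, fine_prev + 0.05))    # True: change propagated to fine grid
```

A deep generative fusion model replaces this fixed additive rule with a learned mapping from the coarse time series and past fine acquisitions to the target fine image.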

Shahine Bouabid, Maxim Chernetskiy, Maxime Rischard, Jevgenij Gamper • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Past image generation | Texas housing data (test) | SSIM | 0.5976 | 6 |
| Future image generation | Texas housing data (test) | SSIM | 0.422 | 6 |
| Super-Resolution | fMoW-Sentinel2 crop field dataset | SSIM | 0.2057 | 6 |
| Object Counting | Texas housing dataset (test) | R2 (Buildings, Mean) | 0.8793 | 4 |
| Image Generation Quality | Texas housing dataset (test) | Selection Rate (Similarity) | 45 | 3 |
