
SkySense: A Multi-Modal Remote Sensing Foundation Model Towards Universal Interpretation for Earth Observation Imagery

About

Prior studies on Remote Sensing Foundation Models (RSFMs) reveal immense potential towards a generic model for Earth Observation. Nevertheless, these works primarily focus on a single modality without temporal and geo-context modeling, which hampers their capability on diverse tasks. In this study, we present SkySense, a generic billion-scale model, pre-trained on a curated multi-modal Remote Sensing Imagery (RSI) dataset with 21.5 million temporal sequences. SkySense incorporates a factorized multi-modal spatiotemporal encoder that takes temporal sequences of optical and Synthetic Aperture Radar (SAR) data as input. This encoder is pre-trained with our proposed Multi-Granularity Contrastive Learning to learn representations across different modal and spatial granularities. To further enhance the RSI representations with geo-context clues, we introduce Geo-Context Prototype Learning to learn region-aware prototypes upon RSI's multi-modal spatiotemporal features. To the best of our knowledge, SkySense is the largest Multi-Modal RSFM to date, whose modules can be flexibly combined or used individually to accommodate various tasks. It demonstrates remarkable generalization capabilities in a thorough evaluation encompassing 16 datasets over 7 tasks, from single- to multi-modal, static to temporal, and classification to localization. SkySense surpasses 18 recent RSFMs in all test scenarios. Specifically, it outperforms the latest models such as GFM, SatLas and Scale-MAE by a large margin, i.e., 2.76%, 3.67% and 3.61% on average, respectively. We will release the pre-trained weights to facilitate future research and Earth Observation applications.
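The Multi-Granularity Contrastive Learning described above aligns paired embeddings from different modalities (e.g., optical and SAR) at several spatial granularities. The sketch below is a minimal, hypothetical illustration of such an objective using a standard InfoNCE-style loss in NumPy; all function names, shapes, and the pooling scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.07):
    """InfoNCE loss aligning paired embeddings z_a[i] <-> z_b[i].

    z_a, z_b: (N, D) L2-normalized embeddings from two modalities
    (e.g., optical and SAR) at one spatial granularity.
    """
    # Cosine-similarity logits between every pair in the batch.
    logits = (z_a @ z_b.T) / temperature  # (N, N); matches on the diagonal
    # Row-wise cross-entropy with the diagonal as the positive index,
    # computed in a numerically stable log-softmax form.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z_a))
    return -log_prob[idx, idx].mean()

def multi_granularity_loss(feats_a, feats_b):
    """Average the contrastive loss over several granularities
    (e.g., pixel-, object-, and image-level pooled features)."""
    return float(np.mean([info_nce(a, b) for a, b in zip(feats_a, feats_b)]))

def normed(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(0)
z = normed(rng.normal(size=(8, 32)))
loss_same = info_nce(z, z)                               # perfectly aligned pairs
loss_rand = info_nce(z, normed(rng.normal(size=(8, 32))))  # unrelated pairs
```

Under this toy setup, `loss_same` is lower than `loss_rand`, since aligned pairs dominate the softmax along the diagonal; averaging the per-granularity losses yields a single training objective.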

Xin Guo, Jiangwei Lao, Bo Dang, Yingying Zhang, Lei Yu, Lixiang Ru, Liheng Zhong, Ziyuan Huang, Kang Wu, Dingxiang Hu, Huimei He, Jian Wang, Jingdong Chen, Ming Yang, Yongjun Zhang, Yansheng Li • 2023

Related benchmarks

Task                      Dataset                                      Metric          Result  Rank
Change Detection          LEVIR-CD                                     F1 Score        92.58   188
Semantic Segmentation     iSAID                                        mIoU            70.91   68
Change Detection          LEVIR                                        F1 Score        92.58   62
Object Detection          DIOR                                         mAP50           78.73   50
Classification            AID (test)                                   Top-1 Accuracy  98.6    41
Rotated Object Detection  DIOR-R extended (test)                       mAP             74.27   28
Change Detection          OSCD                                         F1 Score        60.06   26
Scene Classification      RESISC-45 (test)                             OA              96.32   26
Object Detection          DIOR-R                                       mAP             74.27   21
Scene Classification      RESISC-45 Standard (20% train and 80% test)  OA              96.32   21
Showing 10 of 29 rows
