
Multi-Label Guided Soft Contrastive Learning for Efficient Earth Observation Pretraining

About

Self-supervised pretraining on large-scale satellite data has sparked great interest in building Earth observation (EO) foundation models. However, many important resources beyond pure satellite imagery, such as land-cover-land-use products that provide free global semantic information, as well as vision foundation models that hold strong knowledge of the natural world, are not widely studied. In this work, we show that these free additional resources not only help resolve common contrastive learning bottlenecks but also significantly boost the efficiency and effectiveness of EO pretraining. Specifically, we first propose soft contrastive learning, which optimizes cross-scene soft similarity based on land-cover-generated multi-label supervision, naturally solving the issues of multiple positive samples and overly strict positive matching in complex scenes. Second, we revisit and explore cross-domain continual pretraining for both multispectral and SAR imagery, building efficient EO foundation models from the strongest vision models such as DINOv2. Adapting simple weight-initialization and Siamese masking strategies into our soft contrastive learning framework, we demonstrate impressive continual pretraining performance even when the input modalities are not aligned. Without prohibitive training, we produce multispectral and SAR foundation models that achieve significantly better results than most existing SOTA models on 10 out of 11 downstream tasks. For example, our ResNet50/ViT-S models achieve 84.8/85.0 linear probing mAP scores on BigEarthNet-10%, better than most existing ViT-L models; under the same setting, our ViT-B sets a new record of 86.8 in multispectral and 82.5 in SAR, the latter even better than many multispectral models. Dataset and models are available at https://github.com/zhu-xlab/softcon.
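The core idea of the soft contrastive objective can be sketched as follows: instead of hard 0/1 instance-discrimination targets, the target similarity between two scenes is derived from the overlap of their land-cover multi-label vectors, and the predicted embedding similarity is regressed toward that soft target. A minimal sketch, assuming cosine similarity of binary label vectors as the soft target and a binary cross-entropy formulation (the function name, temperature value, and exact loss form are illustrative assumptions, not the authors' exact implementation):

```python
import torch
import torch.nn.functional as F

def soft_contrastive_loss(embeddings, labels, temperature=0.1):
    """Sketch of a multi-label-guided soft contrastive loss.

    embeddings: (B, D) features from the image encoder
    labels:     (B, C) binary multi-label vectors (e.g. land-cover classes
                present in each scene, derived from a land-cover product)
    """
    # Predicted pairwise similarity between all scenes, squashed to (0, 1).
    z = F.normalize(embeddings, dim=1)
    pred = torch.sigmoid(z @ z.t() / temperature)

    # Soft targets: cosine similarity of the binary label vectors, so two
    # scenes sharing many land-cover classes get a target close to 1.
    l = F.normalize(labels.float(), dim=1)
    target = l @ l.t()

    # Exclude self-pairs and regress predictions toward the soft targets.
    mask = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    return F.binary_cross_entropy(pred[mask], target[mask])
```

Note how this removes the hard-positive-matching bottleneck: two different scenes with identical label sets are pulled together (target 1.0) rather than treated as negatives, while partially overlapping scenes receive a graded target in between.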

Yi Wang, Conrad M Albrecht, Xiao Xiang Zhu • 2024

Related benchmarks

| Task                        | Dataset                               | Metric                | Result | Rank |
|-----------------------------|---------------------------------------|-----------------------|--------|------|
| Change Detection            | OSCD                                  | F1 Score              | 62.4   | 34   |
| Field Boundary Segmentation | FTW (test)                            | Pixel IoU             | 52     | 19   |
| Semantic segmentation       | LoveDA Cross-Style                    | mIoU                  | 41.01  | 16   |
| Semantic segmentation       | Five-Billion-Pixels (Cross-Regional)  | mIoU                  | 43.24  | 16   |
| Semantic segmentation       | FLAIR Cross-Regional                  | mIoU                  | 53.21  | 16   |
| Semantic segmentation       | OpenEarthMap (Cross-Continent)        | mIoU                  | 39.63  | 16   |
| Semantic segmentation       | Five-Billion-Pixels Cross-Sensor      | mIoU                  | 29.59  | 16   |
| Semantic segmentation       | Potsdam&Vaihingen Cross Spectral Band | mIoU                  | 11.27  | 16   |
| Classification              | EuroSAT                               | Overall Accuracy (OA) | 96.09  | 12   |
| Semantic segmentation       | SegMunich                             | F1 Background         | 89.77  | 8    |
Showing 10 of 12 rows
