
SIMPLER: Efficient Foundation Model Adaptation via Similarity-Guided Layer Pruning for Earth Observation

About

Fine-tuning foundation models for Earth Observation is computationally expensive, with high time and memory demands during both training and deployment. Parameter-efficient methods reduce training cost but retain full inference complexity, while post-hoc compression optimizes inference only after costly full fine-tuning. We introduce SIMPLER, a pre-fine-tuning architecture selection method that reduces inference and deployment costs by identifying an effective model depth before adaptation. SIMPLER exploits the stabilization of representations in deeper layers of pre-trained vision transformers: it computes layer-wise representation similarity on unlabeled task data and applies an automated scoring function to select redundant layers, with no gradients, magnitude heuristics, or hyperparameter tuning required. On Prithvi-EO-2, SIMPLER prunes up to 79% of parameters while retaining 94% of baseline performance, yielding a 2.1x training speedup and a 2.6x inference speedup. The method generalizes to TerraMind (a multimodal EO foundation model) and ImageNet-pretrained ViT-MAE, demonstrating applicability across tasks, architectures, and spectral modalities. Code is available at https://gitlab.citius.gal/hpc4rs/simpler.
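The abstract's similarity-guided selection can be sketched in a few lines. The paper's exact scoring function is not reproduced here; this is a minimal sketch assuming redundancy is flagged by mean cosine similarity between consecutive layers' pooled features, with `select_redundant_layers` and the `threshold` value being illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def select_redundant_layers(layer_feats, threshold=0.95):
    """Flag layers whose output barely changes the representation.

    layer_feats: list of (n_samples, dim) arrays, one per transformer layer,
    e.g. mean-pooled token embeddings computed on unlabeled task data.
    A layer i is marked redundant when the mean cosine similarity between
    its features and those of layer i-1 exceeds `threshold` -- no gradients
    or labels are needed, only forward-pass activations.
    """
    redundant = []
    for i in range(1, len(layer_feats)):
        a, b = layer_feats[i - 1], layer_feats[i]
        num = np.sum(a * b, axis=1)
        den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
        sim = float(np.mean(num / den))  # average over samples
        if sim >= threshold:
            redundant.append(i)
    return redundant

# Toy usage: layer 2 is a near-copy of layer 1, so it is flagged.
rng = np.random.default_rng(0)
f0 = rng.normal(size=(8, 16))
f1 = rng.normal(size=(8, 16))
f2 = f1 + 1e-6 * rng.normal(size=(8, 16))
print(select_redundant_layers([f0, f1, f2]))
```

Dropping the flagged layers before fine-tuning is what yields the reduced depth; the threshold shown here is a placeholder for the paper's automated scoring.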

Víctor Barreiro, Johannes Jakubik, Francisco Argüello, Dora B. Heras • 2026

Related benchmarks

Task                       | Dataset              | Metric           | Result | Rank
Image Classification       | CIFAR-100 (val)      | Accuracy         | 72.8   | 776
Semantic Segmentation      | MADOS                | mIoU             | 62.8   | 26
Multi-Label Classification | BigEarthNet v2 (test)| mAP              | 71.2   | 4
Time-Series Classification | Sen4Map (test)       | Overall Accuracy | 73.9   | 4
