SciLT: Long-Tailed Classification in Scientific Image Domains
About
Long-tailed recognition has benefited from foundation models and fine-tuning paradigms, yet existing studies and benchmarks are mainly confined to natural image domains, where pre-training and fine-tuning data share similar distributions. In contrast, scientific images exhibit distinct visual characteristics and supervision signals, raising questions about the effectiveness of fine-tuning foundation models in such settings. In this work, we investigate scientific long-tailed recognition under a purely visual and parameter-efficient fine-tuning (PEFT) paradigm. Experiments on three scientific benchmarks show that fine-tuning foundation models yields limited gains, and reveal that penultimate-layer features play an important role, particularly for tail classes. Motivated by these findings, we propose SciLT, a framework that exploits multi-level representations through adaptive feature fusion and dual-supervision learning. By jointly leveraging penultimate- and final-layer features, SciLT achieves balanced performance across head and tail classes. Extensive experiments demonstrate that SciLT consistently outperforms existing methods, establishing a strong and practical baseline for scientific long-tailed recognition and providing valuable guidance for adapting foundation models to scientific data with substantial domain shifts.
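The fusion idea described above can be sketched in a few lines. This is a minimal illustrative sketch, not the SciLT implementation: the function names (`adaptive_fuse`, `dual_supervision_loss`), the scalar gate, and the summed cross-entropy terms are assumptions chosen to make the mechanism concrete — blending penultimate- and final-layer features with a learned mixing weight, while each feature level also receives its own classification loss.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fuse(penultimate, final, gate_param):
    """Blend penultimate- and final-layer features of the same dimension.

    gate_param is a hypothetical learnable scalar; the sigmoid keeps the
    mixing weight in (0, 1), so the model can lean on either feature level.
    """
    g = sigmoid(gate_param)
    return g * penultimate + (1.0 - g) * final

def dual_supervision_loss(logits_penult, logits_final, label):
    """Illustrative dual supervision: one classifier head per feature
    level, with the two cross-entropy losses summed."""
    def ce(logits, y):
        z = logits - logits.max()            # numerically stable softmax
        logp = z - np.log(np.exp(z).sum())
        return -logp[y]
    return ce(logits_penult, label) + ce(logits_final, label)
```

With `gate_param = 0` the gate sits at 0.5 and the two feature levels contribute equally; training would push the gate toward whichever level is more informative for the domain.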
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Long-tailed recognition | iNaturalist 2018 | -- | -- | 12 |
| Long-tailed classification | Places365-LT | -- | -- | 8 |
| Long-tailed classification | ImageNet LT | -- | -- | 6 |
| Long-tailed classification | ISIC | MEL | 67.8 | 3 |
| Long-Tailed Image Classification | NIH-Chest | Accuracy (Many) | 33.5 | 3 |
| Long-Tailed Image Classification | blood | Basophil Accuracy | 100 | 3 |