
When Do We Not Need Larger Vision Models?

About

Scaling up the size of vision models has been the de facto standard to obtain more powerful visual representations. In this work, we discuss the point beyond which larger vision models are not necessary. First, we demonstrate the power of Scaling on Scales (S$^2$), whereby a pre-trained and frozen smaller vision model (e.g., ViT-B or ViT-L), run over multiple image scales, can outperform larger models (e.g., ViT-H or ViT-G) on classification, segmentation, depth estimation, Multimodal LLM (MLLM) benchmarks, and robotic manipulation. Notably, S$^2$ achieves state-of-the-art performance in detailed visual understanding for MLLMs on the V* benchmark, surpassing models such as GPT-4V. We examine the conditions under which S$^2$ is a preferred scaling approach compared to scaling on model size. While larger models have the advantage of better generalization on hard examples, we show that features of larger vision models can be well approximated by those of multi-scale smaller models. This suggests most, if not all, of the representations learned by current large pre-trained models can also be obtained from multi-scale smaller models. Our results show that a multi-scale smaller model has comparable learning capacity to a larger model, and pre-training smaller models with S$^2$ can match or even exceed the advantage of larger models. We release a Python package that can apply S$^2$ on any vision model with one line of code: https://github.com/bfshi/scaling_on_scales.

Baifeng Shi, Ziyang Wu, Maolin Mao, Xin Wang, Trevor Darrell • 2024
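The core S$^2$ recipe described above can be sketched in a few lines: interpolate the image to several scales, split each up-scaled image into crops of the backbone's native input size, run the same frozen backbone on every crop, pool crops within a scale, and concatenate features across scales. The sketch below is an illustration, not the released package's API; `toy_backbone` stands in for a real frozen ViT, and `resize_nn` is a naive nearest-neighbor resize used only to keep the example self-contained.

```python
import numpy as np

def toy_backbone(img):
    # Stand-in for a frozen ViT: maps a (base, base, 3) image to a
    # fixed-size feature vector (here, channel means of the 4 quadrants).
    h, w, _ = img.shape
    quads = [img[:h//2, :w//2], img[:h//2, w//2:],
             img[h//2:, :w//2], img[h//2:, w//2:]]
    return np.concatenate([q.mean(axis=(0, 1)) for q in quads])  # (12,)

def resize_nn(img, size):
    # Naive nearest-neighbor resize to (size, size, 3).
    h, w, _ = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def s2_forward(img, base=32, scales=(1, 2)):
    # Scaling on Scales: run one frozen backbone at multiple image scales,
    # pool the sub-crop features within each scale, and concatenate the
    # per-scale features along the channel dimension.
    feats = []
    for s in scales:
        big = resize_nn(img, base * s)
        # Split the up-scaled image into s*s crops of the base input size.
        crops = [big[i*base:(i+1)*base, j*base:(j+1)*base]
                 for i in range(s) for j in range(s)]
        crop_feats = np.stack([toy_backbone(c) for c in crops])
        feats.append(crop_feats.mean(axis=0))  # pool crops at this scale
    return np.concatenate(feats)

img = np.random.rand(32, 32, 3)
f1 = toy_backbone(img)                       # single-scale feature, (12,)
f2 = s2_forward(img, base=32, scales=(1, 2)) # multi-scale feature, (24,)
print(f1.shape, f2.shape)
```

Note the design point the paper exploits: the backbone itself is unchanged and frozen; only the feature dimension grows linearly with the number of scales, which is what lets a small model's multi-scale features stand in for a larger model's representations.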

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Question Answering | VizWiz | Accuracy: 56 | 1525 |
| Visual Question Answering | TextVQA | Accuracy: 63.1 | 1285 |
| Visual Question Answering | GQA | Accuracy: 63.2 | 1249 |
| Multimodal Understanding | MMBench | -- | 637 |
| Multimodal Understanding | MM-Vet | MM-Vet Score: 35.4 | 531 |
| Text-based Visual Question Answering | TextVQA (val) | Accuracy: 54.5 | 262 |
| Visual Question Answering | VQAv2 | Accuracy: 80.9 | 177 |
| Chart Question Answering | ChartQA (test) | Accuracy: 20.3 | 176 |
| Document Visual Question Answering | DocVQA (val) | Accuracy: 30.7 | 157 |
| Hallucination Evaluation | POPE | Accuracy: 87.4 | 153 |

Showing 10 of 17 rows.
