
FastVLM: Efficient Vision Encoding for Vision Language Models

About

Scaling the input image resolution is essential for enhancing the performance of Vision Language Models (VLMs), particularly in text-rich image understanding tasks. However, popular visual encoders such as ViTs become inefficient at high resolutions due to the large number of tokens and high encoding latency caused by stacked self-attention layers. At different operational resolutions, the vision encoder of a VLM can be optimized along two axes: reducing encoding latency and minimizing the number of visual tokens passed to the LLM, thereby lowering overall latency. Based on a comprehensive efficiency analysis of the interplay between image resolution, vision latency, token count, and LLM size, we introduce FastVLM, a model that achieves an optimized trade-off between latency, model size, and accuracy. FastVLM incorporates FastViTHD, a novel hybrid vision encoder designed to output fewer tokens and significantly reduce encoding time for high-resolution images. Unlike previous methods, FastVLM achieves the optimal balance between visual token count and image resolution solely by scaling the input image, eliminating the need for additional token pruning and simplifying the model design. In the LLaVA-1.5 setup, FastVLM achieves a 3.2$\times$ improvement in time-to-first-token (TTFT) while maintaining similar performance on VLM benchmarks compared to prior works. Compared to LLaVA-OneVision at the highest resolution (1152$\times$1152), FastVLM achieves better performance on key benchmarks like SeedBench, MMMU and DocVQA, using the same 0.5B LLM, but with 85$\times$ faster TTFT and a vision encoder that is 3.4$\times$ smaller. Code and models are available at https://github.com/apple/ml-fastvlm.
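The efficiency argument in the abstract can be made concrete: TTFT decomposes into vision-encoding latency plus the LLM's prefill cost over the visual tokens, so both fewer tokens and a faster encoder lower it. The sketch below illustrates that decomposition for a square, patch-based tokenizer. All function names, patch sizes, and timing constants are illustrative assumptions, not values or code from the paper.

```python
# Illustrative sketch of the latency trade-off described in the abstract.
# Assumes a square input and patch-based tokenization; the timing constants
# below are made-up placeholders, not measurements from FastVLM.

def visual_token_count(resolution: int, patch: int = 16, downsample: int = 1) -> int:
    """Tokens a patch-based encoder emits for a resolution x resolution image,
    optionally reduced by a spatial downsampling factor per axis."""
    side = resolution // patch
    return (side * side) // (downsample * downsample)

def ttft_ms(encode_ms: float, tokens: int, prefill_ms_per_token: float) -> float:
    """Time-to-first-token = vision encoding latency + LLM prefill over tokens."""
    return encode_ms + tokens * prefill_ms_per_token

# A ViT-style encoder at 1152x1152 with 16-pixel patches emits many tokens...
vit_tokens = visual_token_count(1152, patch=16)          # 72 * 72 = 5184
# ...while an encoder with stronger spatial reduction emits far fewer.
hybrid_tokens = visual_token_count(1152, patch=16, downsample=4)  # 18 * 18 = 324

# With the same (hypothetical) prefill cost per token, fewer tokens
# directly shrink the prefill term of TTFT.
print(ttft_ms(encode_ms=50.0, tokens=vit_tokens, prefill_ms_per_token=0.1))
print(ttft_ms(encode_ms=20.0, tokens=hybrid_tokens, prefill_ms_per_token=0.1))
```

Scaling only the input resolution, as the abstract describes, moves along this token/latency curve without needing a separate token-pruning stage.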

Pavan Kumar Anasosalu Vasu, Fartash Faghri, Chun-Liang Li, Cem Koc, Nate True, Albert Antony, Gokul Santhanam, James Gabriel, Peter Grasch, Oncel Tuzel, Hadi Pouransari · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | GQA | Accuracy | 62.7 | 963 |
| Object Hallucination Evaluation | POPE | Accuracy | 87.2 | 935 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 77.1 | 496 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 37.5 | 418 |
| OCR Evaluation | OCRBench | Score | 67.3 | 296 |
| Multi-discipline Multimodal Understanding | MMMU | -- | -- | 266 |
| Visual Question Answering | ChartQA | Accuracy | 71.6 | 239 |
| Science Question Answering | ScienceQA | Accuracy | 92.9 | 229 |
| Chart Question Answering | ChartQA | Accuracy | 82.4 | 229 |
| Multimodal Understanding | SEED-Bench | Accuracy | 72.6 | 203 |

Showing 10 of 28 rows.
