
Mobile-VideoGPT: Fast and Accurate Model for Mobile Video Understanding

About

Video understanding models often struggle with high computational requirements, extensive parameter counts, and slow inference speed, making them inefficient for practical use. To tackle these challenges, we propose Mobile-VideoGPT, an efficient multimodal framework designed to operate with fewer than a billion parameters. Unlike traditional video large multimodal models (LMMs), Mobile-VideoGPT consists of lightweight dual visual encoders, efficient projectors, and a small language model (SLM), enabling real-time throughput. To further improve efficiency, we present an Attention-Based Frame Scoring mechanism to select key-frames, along with an efficient token projector that prunes redundant visual tokens while preserving essential contextual cues. We evaluate our model across six well-established video understanding benchmarks (e.g., MVBench, EgoSchema, NExT-QA, and PerceptionTest). Our results show that Mobile-VideoGPT-0.5B can generate up to 46 tokens per second while outperforming existing state-of-the-art 0.5B-parameter models by 6 points on average, with 40% fewer parameters and more than 2x higher throughput. Our code and models are publicly available at: https://github.com/Amshaker/Mobile-VideoGPT.
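The two efficiency ideas in the abstract, scoring frames by attention to pick key-frames and pruning low-scoring visual tokens, can be illustrated with a minimal sketch. All function names here are hypothetical, and the choice of a softmax over dot-product similarities is our assumption for illustration; the paper's actual scoring and projector designs may differ:

```python
import numpy as np

def score_frames(frame_feats, query):
    """Attention-style weights over T frames.

    frame_feats: (T, D) pooled per-frame visual features.
    query: (D,) reference embedding (e.g. a text or [CLS] vector).
    """
    logits = frame_feats @ query / np.sqrt(frame_feats.shape[-1])
    exp = np.exp(logits - logits.max())        # numerically stable softmax
    return exp / exp.sum()

def select_keyframes(frame_feats, query, k):
    """Keep the k highest-scoring frames, restoring temporal order."""
    scores = score_frames(frame_feats, query)
    keep = np.sort(np.argsort(scores)[-k:])
    return keep, scores

def prune_tokens(tokens, token_scores, keep_ratio=0.5):
    """Drop redundant visual tokens, keeping the top fraction by score.

    tokens: (N, D) visual tokens; token_scores: (N,) importance scores.
    """
    m = max(1, int(len(tokens) * keep_ratio))
    keep = np.sort(np.argsort(token_scores)[-m:])  # preserve original order
    return tokens[keep]
```

With `keep_ratio=0.5`, half the visual tokens are discarded before projection into the language model, which is the kind of reduction that lets a sub-billion-parameter model reach real-time throughput.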

Abdelrahman Shaker, Muhammad Maaz, Chenhui Gou, Hamid Rezatofighi, Salman Khan, Fahad Shahbaz Khan• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Video Question Answering | ActivityNet-QA (test) | Accuracy | 54.4 | 288 |
| Long Video Understanding | MLVU | -- | -- | 154 |
| Video Understanding | MVBench (test) | Accuracy | 53.6 | 151 |
| Video Question Answering | NExT-QA Multi-choice | Accuracy | 73.7 | 114 |
| Video Understanding | EgoSchema (test) | Accuracy | 36.7 | 55 |
| Video Understanding | PerceptionTest (val) | Accuracy | 65.3 | 12 |
