
MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens

About

This paper introduces MiniGPT4-Video, a multimodal Large Language Model (LLM) designed specifically for video understanding. The model processes both temporal visual and textual data, making it adept at understanding the complexities of videos. Building on the success of MiniGPT-v2, which excelled at translating visual features into the LLM space for single images and achieved impressive results on various image-text benchmarks, this paper extends the model's capabilities to sequences of frames, enabling it to comprehend videos. MiniGPT4-Video considers not only visual content but also textual conversations, allowing the model to effectively answer queries involving both visual and text components. The proposed model outperforms existing state-of-the-art methods, registering gains of 4.22%, 1.13%, 20.82%, and 13.1% on the MSVD, MSRVTT, TGIF, and TVQA benchmarks, respectively. The models and code are publicly available at https://vision-cair.github.io/MiniGPT4-video/
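To make the "interleaved visual-textual tokens" idea concrete, below is a minimal sketch (not the authors' released code) of how per-frame visual tokens can be projected into the LLM embedding space and interleaved with subtitle tokens before being passed to the language model. The class name, feature dimensions, and tokens-per-frame count are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of interleaving frame visual tokens with subtitle tokens.
# Dimensions and names are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class InterleavedVideoInput(nn.Module):
    def __init__(self, vision_dim=1408, llm_dim=4096):
        super().__init__()
        # Linear projection mapping visual features into the LLM embedding space.
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, frame_features, subtitle_embeddings):
        """
        frame_features: list of per-frame tensors, each (tokens_per_frame, vision_dim).
        subtitle_embeddings: list of per-frame tensors, each (n_subtitle_tokens, llm_dim),
                             already embedded by the LLM's token embedder.
        Returns one (seq_len, llm_dim) sequence: [frame_1, sub_1, frame_2, sub_2, ...].
        """
        pieces = []
        for vis, txt in zip(frame_features, subtitle_embeddings):
            pieces.append(self.proj(vis))   # visual tokens, now in LLM space
            pieces.append(txt)              # subtitle tokens for the same frame
        return torch.cat(pieces, dim=0)

# Toy usage with random features for 4 sampled frames.
if __name__ == "__main__":
    mixer = InterleavedVideoInput()
    frames = [torch.randn(64, 1408) for _ in range(4)]   # 64 visual tokens per frame
    subs = [torch.randn(8, 4096) for _ in range(4)]      # 8 subtitle tokens per frame
    print(mixer(frames, subs).shape)                     # torch.Size([288, 4096])
```

The resulting sequence can be prepended to a question prompt so the LLM attends jointly to visual and subtitle information for each frame.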

Kirolos Ataallah, Xiaoqian Shen, Eslam Abdelrahman, Essam Sleiman, Deyao Zhu, Jian Ding, Mohamed Elhoseiny • 2024

Related benchmarks

Task                            Dataset                           Metric          Result   Rank
Video Understanding             MVBench                           Accuracy        51.2     247
Long Video Understanding       MLVU                              --              --       72
Visual Dialog                   VisDial 1.0 (val)                 MRR             0.146    65
Video Question Answering        MSVD-QA zero-shot (test)          Accuracy        73.9     56
Video Question Answering        MSRVTT-QA zero-shot (test)        Accuracy        59.7     55
Video Question Answering        ActivityNet-QA zero-shot (test)   Accuracy        46.3     55
Temporal Video Understanding    TempCompass                       Average Score   51.5     52
Open-ended Question Answering   ActivityNet                       Accuracy        45.85    29
Video Dialogue                  AVSD DSTC8 (test)                 BLEU-4          5.8      24
Open-ended Question Answering   MSVD                              Accuracy        73.92    22
Showing 10 of 27 rows
