
How Can Objects Help Video-Language Understanding?

About

Do we still need to represent objects explicitly in multimodal large language models (MLLMs)? At one extreme, pre-trained encoders convert images into visual tokens, with which objects and spatiotemporal relationships may be modeled implicitly. At the other extreme, image captions by themselves provide strong empirical performance on understanding tasks, despite missing fine-grained spatiotemporal information. To answer this question, we introduce ObjectMLLM, a framework capable of leveraging arbitrary computer vision algorithms to extract and integrate structured visual representations. Through extensive evaluations on six video question answering benchmarks, we confirm that explicit integration of object-centric representations remains necessary. Surprisingly, we observe that the simple approach of quantizing the continuous, structured object information and representing it as plain text performs the best, offering a data-efficient way to integrate other visual perception modules into MLLM design. Our code and models are released at https://github.com/brown-palm/ObjectMLLM.
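The abstract's best-performing recipe, quantizing continuous object information and rendering it as plain text, can be illustrated with a minimal sketch. The function names, coordinate conventions, and output format below are illustrative assumptions, not the exact serialization used by ObjectMLLM (see the released code for that):

```python
# Hedged sketch: serialize detected object boxes as plain text by quantizing
# continuous (x1, y1, x2, y2) pixel coordinates into a fixed number of bins.
# The concrete token format here is a hypothetical choice for illustration.

def box_to_text(box, width, height, bins=100):
    """Quantize a pixel-space (x1, y1, x2, y2) box into integer bins in
    [0, bins - 1] and render it as a compact text token."""
    x1, y1, x2, y2 = box
    q = [
        round(x1 / width * (bins - 1)),
        round(y1 / height * (bins - 1)),
        round(x2 / width * (bins - 1)),
        round(y2 / height * (bins - 1)),
    ]
    return "[" + ",".join(str(v) for v in q) + "]"

def frame_to_text(objects, width, height):
    """Render all (label, box) pairs detected in one frame as a single
    plain-text line an LLM can consume alongside captions."""
    return " ".join(f"{label}{box_to_text(box, width, height)}"
                    for label, box in objects)

# Example: a full-frame person detection in a 640x480 frame.
line = frame_to_text([("person", (0, 0, 640, 480))], 640, 480)
print(line)  # person[0,0,99,99]
```

Because the quantized coordinates are ordinary digit strings, no new vocabulary or projection layers are needed; any text-only LLM can ingest the object stream as-is, which is what makes the approach data-efficient.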

Zitian Tang, Shijie Wang, Junho Cho, Jaewook Yoo, Chen Sun • 2025

Related benchmarks

Task                     | Dataset           | Metric   | Result | Rank
Video Question Answering | IntentQA          | --       | --     | 35
Video Reasoning          | STAR              | Score    | 67.2   | 19
Video QA                 | NEXT-QA           | Accuracy | 78.5   | 7
Video Reasoning          | Perception (test) | Accuracy | 66.6   | 5
Video Reasoning          | CLEVRER           | Accuracy | 77.6   | 4
