
Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models

About

Large Multimodal Models (LMMs) have shown promise in vision-language tasks but struggle with high-resolution input and detailed scene understanding. Addressing these challenges, we introduce Monkey to enhance LMM capabilities. First, Monkey processes input images by dividing them into uniform patches, each matching the size (e.g., 448x448) used in the original training of the well-trained vision encoder. Equipped with an individual adapter for each patch, Monkey can handle resolutions up to 1344x896 pixels, enabling the detailed capture of complex visual information. Second, it employs a multi-level description generation method, enriching the context for scene-object associations. This two-part strategy ensures more effective learning from generated data: the higher resolution allows for a more detailed capture of visuals, which in turn enhances the effectiveness of comprehensive descriptions. Extensive ablative results validate the effectiveness of our designs. Additionally, experiments on 18 datasets further demonstrate that Monkey surpasses existing LMMs in many tasks, such as image captioning and various visual question answering formats. Notably, in qualitative tests focused on dense text question answering, Monkey has exhibited encouraging results compared with GPT-4V. Code is available at https://github.com/Yuliang-Liu/Monkey.
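The patch-dividing step described above can be sketched in a few lines. This is an illustrative snippet, not the authors' implementation: the 448x448 patch size and the 1344x896 input resolution come from the abstract, while the function name and box representation are assumptions for demonstration.

```python
# Minimal sketch (not the authors' code) of tiling a high-resolution
# image into uniform patches that match the vision encoder's training
# size. Assumes width and height are multiples of the patch size, as
# in Monkey's 1344x896 setting with 448x448 patches.

def split_into_patches(width, height, patch=448):
    """Return (left, upper, right, lower) boxes covering the image
    in a uniform grid of patch-sized tiles."""
    boxes = []
    for top in range(0, height, patch):
        for left in range(0, width, patch):
            boxes.append((left, top, left + patch, top + patch))
    return boxes

# A 1344x896 input yields a 3x2 grid of six 448x448 patches; each
# patch is then encoded by the shared vision encoder, with a separate
# lightweight adapter per patch position.
grid = split_into_patches(1344, 896)
print(len(grid))  # 6 patches
```

Because every tile has the same size the encoder was trained on, no resizing of the encoder's input is needed; only the per-patch adapters are new.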

Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, Xiang Bai • 2023

Related benchmarks

Task                                   | Dataset  | Metric       | Result | Rank
Visual Question Answering              | VQA v2   | Accuracy     | 80.3   | 1165
Visual Question Answering              | TextVQA  | Accuracy     | 67.6   | 1117
Visual Question Answering              | VizWiz   | Accuracy     | 61.2   | 1043
Visual Question Answering              | GQA      | Accuracy     | 60.7   | 963
Object Hallucination Evaluation        | POPE     | Accuracy     | 67.6   | 935
Multimodal Evaluation                  | MME      | Score        | 1920   | 557
Text-based Visual Question Answering   | TextVQA  | Accuracy     | 64.3   | 496
Multimodal Understanding               | MM-Vet   | MM-Vet Score | 33     | 418
Multimodal Understanding               | MMBench  | Accuracy     | 72.4   | 367
Mathematical Reasoning                 | MathVista| Score        | 34.8   | 322
Showing 10 of 87 rows.

Other info

Code: https://github.com/Yuliang-Liu/Monkey