Multimodal Evaluation on MLLM Suite Subset (GQA, MMB, MME, POPE)
[Chart: Average Score over time for GQA, MMB, MME, and POPE, Qwen2.5-VL-7B backbone; latest data point Sep 29, 2025]
Evaluation Results
| Method | Configuration | Date | GQA | MMB | MME | POPE | Average Score |
|---|---|---|---|---|---|---|---|
| Qwen2.5-VL-7B | Backbone=Qwen2.5-VL-7B... | 2025.09 | 60.84 | 84.1 | 2,310 | 86.3 | 100 |
| DivPrune | Backbone=Qwen2.5-VL-7B... | 2025.09 | 60.05 | 79.55 | 2,173 | 83.42 | 96 |
| ZOO-Prune | Backbone=Qwen2.5-VL-7B... | 2025.09 | 58.81 | 80.6 | 2,201 | 84.17 | 96.2 |
| VisionZip | Backbone=Qwen2.5-VL-7B... | 2025.09 | 57.27 | 79.72 | 2,221 | 83.89 | 95.6 |
| DivPrune | Backbone=Qwen2.5-VL-7B... | 2025.09 | 55.49 | 76.03 | 2,054 | 79.05 | 90.5 |
| ZOO-Prune | Backbone=Qwen2.5-VL-7B... | 2025.09 | 55.45 | 76.28 | 2,018 | 80.99 | 90.8 |
| VisionZip | Backbone=Qwen2.5-VL-7B... | 2025.09 | 54.09 | 76.03 | 1,937 | 78.97 | 88.7 |
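The Average Score column appears to be a baseline-normalized mean: each method's per-benchmark score divided by the corresponding Qwen2.5-VL-7B baseline score, averaged across the four benchmarks, and scaled so the baseline equals 100. This is an inference from the numbers in the table, not a stated definition; the function and variable names below are illustrative, not from the leaderboard.

```python
# Baseline scores taken from the Qwen2.5-VL-7B row of the table above.
BASELINE = {"GQA": 60.84, "MMB": 84.1, "MME": 2310, "POPE": 86.3}

def average_score(scores: dict) -> float:
    """Mean of per-benchmark ratios to the baseline, scaled so baseline = 100.

    This reproduces the table's Average Score column under the assumption
    (unverified) that it is a simple unweighted mean of ratios.
    """
    ratios = [scores[bench] / BASELINE[bench] for bench in BASELINE]
    return 100 * sum(ratios) / len(ratios)

# DivPrune's first row from the table: the result is ~96.0, matching its
# listed Average Score of 96.
divprune = {"GQA": 60.05, "MMB": 79.55, "MME": 2173, "POPE": 83.42}
print(round(average_score(divprune), 1))
```

Checking the other rows the same way (e.g. VisionZip's first row yields ~95.6) is consistent with this normalization, up to rounding in the displayed values.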