Vision-Language Understanding on MMBench-CN
[Chart: Accuracy on MMBench-CN over time (May 2025 – Feb 2026); best result 60.6 (Vanilla). Updated 1 month ago.]
Evaluation Results
| Method | Details | Date | Accuracy |
|---|---|---|---|
| Vanilla | Backbone=LLaVA-NeXT-7B... | 2026.02 | 60.6 |
| Vanilla | Base Model=LLaVA-Next-... | 2025.05 | 60.6 |
| PIO-FVLM | Backbone=LLaVA-NeXT-7B... | 2026.02 | 59.5 |
| MoB | Base Model=LLaVA-Next-... | 2025.05 | 58.9 |
| DART | Backbone=LLaVA-NeXT-7B... | 2026.02 | 58.2 |
| Vanilla | Base Model=LLaVA-1.5-7... | 2025.05 | 58.1 |
| MoB | Base Model=LLaVA-1.5-7... | 2025.05 | 57.8 |
| HoloV | Backbone=LLaVA-NeXT-7B... | 2026.02 | 57.5 |
| MoB | Base Model=LLaVA-1.5-7... | 2025.05 | 57.5 |
| CDPruner | Backbone=LLaVA-NeXT-7B... | 2026.02 | 55.7 |
| VisionZip | Backbone=LLaVA-NeXT-7B... | 2026.02 | 55.6 |
| SparseVLM | Backbone=LLaVA-NeXT-7B... | 2026.02 | 54.5 |
| MoB | Base Model=LLaVA-1.5-7... | 2025.05 | 54.5 |
| FastV | Backbone=LLaVA-NeXT-7B... | 2026.02 | 51.9 |