Visual Question Answering on MMMU-Pro
[Chart: Accuracy over time (Feb 15 – Mar 13, 2026); current best is V3Fusion-Rectify at 49.27. Updated 1mo ago.]
Evaluation Results

| Method | Links | Date | Accuracy |
|---|---|---|---|
| V3Fusion-Rectify | Model ID=234 | 2026.03 | 49.27 |
| V3Fusion-MLP | Model ID=234 | 2026.03 | 47.34 |
| Qwen2.5-VL-7b-Instruct | Model ID=5 | 2026.03 | 46.98 |
| V3Fusion-LED | Model ID=234 | 2026.03 | 46.26 |
| Intern-VL2-8b | Model ID=6 | 2026.03 | 43.09 |
| DeepSeek-VL2-Small | Model ID=4 | 2026.03 | 38.00 |
| DeepSeek-VL2-Tiny | Model ID=3 | 2026.03 | 34.61 |
| LlaVA-v1.6-Vicuna-13b | Model ID=1 | 2026.03 | 33.69 |
| LlaVA-v1.6-Vicuna-7b | Model ID=2 | 2026.03 | 32.81 |
| LaViDa-R1 | Model Category=Unified... | 2026.02 | 32.80 |
| LaViDa-O +SFT | Model Category=Unified... | 2026.02 | 31.90 |
| LaViDa-O | Model Category=Unified... | 2026.02 | 31.20 |
| LaViDa-L | Model Category=Visual-... | 2026.02 | 27.10 |
| MMaDa-8B-Base +CoT SFT | Model Category=Unified... | 2026.02 | 8.40 |
| MMaDa-8B-Base | Model Category=Unified... | 2026.02 | 3.20 |