Multi-modal Understanding on MME v1.0 (test)
[Chart: MME^P and MME^C scores over time on MME v1.0 (test). Current best MME^P score: 1,531.3 (LLaVA-1.5-13B), as of May 23, 2024.]
Evaluation Results
| Method | Backbone / Model Size | Date | MME^P Score | MME^C Score |
| --- | --- | --- | --- | --- |
| LLaVA-1.5-13B | Model Size = 13B | 2024.05 | 1,531.3 | 295.4 |
| CSR | Backbone = LLaVA-1.5-13B | 2024.05 | 1,530.6 | 303.9 |
| Self-rewarding | Backbone = LLaVA-1.5-13B | 2024.05 | 1,529 | 300.1 |
| CSR | Backbone = LLaVA-1.5-7B | 2024.05 | 1,524.2 | 367.9 |
| LLaVA-1.5-7B | Model Size = 7B | 2024.05 | 1,510.7 | 348.2 |
| Self-rewarding | Backbone = LLaVA-1.5-7B | 2024.05 | 1,505.6 | 362.5 |
| Human-Prefer | Backbone = LLaVA-1.5-7B | 2024.05 | 1,490.6 | 335 |
| RLHF-V | Backbone = LLaVA-1.5-7B | 2024.05 | 1,489.2 | 349.4 |
| POVID | Backbone = LLaVA-1.5-7B | 2024.05 | 1,452.8 | 325.3 |
| VLFeedback | Backbone = LLaVA-1.5-7B | 2024.05 | 1,432.7 | 321.8 |
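The ranking above can be reproduced programmatically. A minimal sketch (the row tuples and helper name are our own, transcribed from the table; higher MME scores are better) that sorts the entries by either score column:

```python
# Leaderboard entries from the table above: (method, backbone_or_size, mme_p, mme_c).
ROWS = [
    ("LLaVA-1.5-13B",  "13B",            1531.3, 295.4),
    ("CSR",            "LLaVA-1.5-13B",  1530.6, 303.9),
    ("Self-rewarding", "LLaVA-1.5-13B",  1529.0, 300.1),
    ("CSR",            "LLaVA-1.5-7B",   1524.2, 367.9),
    ("LLaVA-1.5-7B",   "7B",             1510.7, 348.2),
    ("Self-rewarding", "LLaVA-1.5-7B",   1505.6, 362.5),
    ("Human-Prefer",   "LLaVA-1.5-7B",   1490.6, 335.0),
    ("RLHF-V",         "LLaVA-1.5-7B",   1489.2, 349.4),
    ("POVID",          "LLaVA-1.5-7B",   1452.8, 325.3),
    ("VLFeedback",     "LLaVA-1.5-7B",   1432.7, 321.8),
]

def rank_by(rows, col):
    """Sort leaderboard rows by a numeric column index, best (highest) first."""
    return sorted(rows, key=lambda r: r[col], reverse=True)

best_p = rank_by(ROWS, 2)[0]  # top entry by MME^P (perception)
best_c = rank_by(ROWS, 3)[0]  # top entry by MME^C (cognition)
print(best_p[0], best_p[2])
print(best_c[0], best_c[3])
```

Note that the two columns produce different winners: LLaVA-1.5-13B leads on MME^P, while CSR on the 7B backbone leads on MME^C.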