Open-ended Generation on COCO 2014 (val)
[Chart: Accuracy and Relevance Score over time — LogicCheckGPT leads with 8.58 Accuracy as of Feb 18, 2024]
Evaluation Results

| Method        | Base Model   | Date    | Accuracy | Relevance Score |
|---------------|--------------|---------|----------|-----------------|
| LogicCheckGPT | QWEN-VL-Chat | 2024.02 | 8.58     | 9.96            |
| vanilla       | QWEN-VL-Chat | 2024.02 | 8.36     | 9.96            |
| LogicCheckGPT | LLaVA-1.5    | 2024.02 | 6.50     | 7.64            |
| LogicCheckGPT | MiniGPT-4    | 2024.02 | 6.02     | 8.38            |
| vanilla       | LLaVA-1.5    | 2024.02 | 5.22     | 7.24            |
| vanilla       | MiniGPT-4    | 2024.02 | 5.00     | 7.96            |
| LogicCheckGPT | mPLUG-Owl    | 2024.02 | 4.32     | 8.74            |
| vanilla       | mPLUG-Owl    | 2024.02 | 3.44     | 8.78            |
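To make the pairwise comparison explicit, here is a minimal sketch (data transcribed from the Accuracy column above; the dictionary layout is my own, not from the leaderboard) that computes how much LogicCheckGPT improves Accuracy over the vanilla decoding of each base model:

```python
# Accuracy on COCO 2014 (val), copied from the table above.
accuracy = {
    "QWEN-VL-Chat": {"vanilla": 8.36, "LogicCheckGPT": 8.58},
    "LLaVA-1.5":    {"vanilla": 5.22, "LogicCheckGPT": 6.50},
    "MiniGPT-4":    {"vanilla": 5.00, "LogicCheckGPT": 6.02},
    "mPLUG-Owl":    {"vanilla": 3.44, "LogicCheckGPT": 4.32},
}

# Per-model Accuracy gain of LogicCheckGPT over the vanilla baseline.
for base_model, scores in accuracy.items():
    delta = scores["LogicCheckGPT"] - scores["vanilla"]
    print(f"{base_model}: +{delta:.2f} Accuracy over vanilla")
```

On these numbers LogicCheckGPT improves Accuracy for every base model listed, with the largest gain on LLaVA-1.5 (+1.28) and the smallest on QWEN-VL-Chat (+0.22).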