Open-ended Visual Question Answering on LLaVA Bench v1 (test)
[Chart: Relevance score over time — current best: DRESS, 37.18 (Nov 16, 2023). Metrics tracked: Relevance, Accuracy, Level of Detail, Helpfulness.]
Evaluation Results
| Method | Configuration | Date | Relevance | Accuracy | Level of Detail | Helpfulness |
|---|---|---|---|---|---|---|
| DRESS | prefix=<excellent> [Ni... | 2023.11 | 37.18 | 20.12 | 21.87 | 26.45 |
| mPLUG | LLM=LLaMA-7B | 2023.11 | 35.17 | 20.33 | 16.33 | 20.33 |
| LLaVA-HF | LLM=Vicuna-13B | 2023.11 | 34.33 | 18.5 | 17.67 | 23.5 |
| InstructBLIP | LLM=Vicuna-13B | 2023.11 | 34 | 21 | 19.67 | 22.67 |
| miniGPT4 | LLM=Vicuna-13B | 2023.11 | 32.45 | 20.33 | 20.17 | 24.17 |
| LLaVA | LLM=LLaMA-13B | 2023.11 | 31.83 | 19.83 | 18.67 | 20.83 |
| BLIP-2 | LLM=T5-XXL | 2023.11 | 25 | 16 | 16 | 17.67 |