Object Hallucination Evaluation on POPE
[Chart: Accuracy over time; current best is LAF-7B at 88.9 (Jul 5, 2024)]
Evaluation Results

| Method              | Model details              | Date    | Accuracy |
|---------------------|----------------------------|---------|----------|
| LAF-7B              | LM=Vicuna (7B), Res.=3...  | 2024.07 | 88.9     |
| LLaVA-1.5+ (Ours)   | LM=Vicuna(7B), Res.=33...  | 2024.07 | 88.9     |
| Mipha-3B+ (Ours)    | LM=Phi-2 (2.7B), Res.=...  | 2024.07 | 88.7     |
| Imp-v1              | LM=Phi-2 (2.7B), Res.=...  | 2024.07 | 88.0     |
| Mipha-3B            | LM=Phi-2 (2.7B), Res.=...  | 2024.07 | 86.7     |
| Bunny-3B            | LM=Phi-2 (2.7B), Res.=...  | 2024.07 | 86.4     |
| LLaVA-1.5           | LM=Vicuna (7B), Res.=3...  | 2024.07 | 85.9     |
| mPLUG-Owl2          | LM=LLAMA (7B), Res.=44...  | 2024.07 | 85.8     |
| TinyLLaVA           | LM=Phi-2 (2.7B), Res.=...  | 2024.07 | 85.7     |
| BLIP-2              | LM=Vicuna (13B), Res.=...  | 2024.07 | 85.3     |
| LLaVA-Phi           | LM=Phi-2 (2.7B), Res.=...  | 2024.07 | 85.0     |
| Mobile VLM-3B       | LM=M-LLaMA (2.7B), Res...  | 2024.07 | 84.9     |
| Mobile VLM-v2-3B    | LM=M-LLaMA (2.7B), Res...  | 2024.07 | 84.7     |
| Mobile VLM-1.7B     | LM=M-LLaMA (1.4B), Res...  | 2024.07 | 84.5    |
| Mobile VLM-v2-1.7B  | LM=M-LLaMA (1.4B), Res...  | 2024.07 | 84.3     |
| MC-LLaVA            | LM=Phi-2 (2.7B), Res.=...  | 2024.07 | 80.6     |
| InstructBLIP        | LM=Vicuna (7B), Res.=2...  | 2024.07 | 78.9     |
| InstructBLIP        | LM=Vicuna (13B), Res.=...  | 2024.07 | 78.9     |