
MS-COCO

Benchmarks

| Task Name | Dataset Name | Metric | SOTA Result | Trend |
|---|---|---|---|---|
| Image Captioning | MS COCO Karpathy (test) | CIDEr | 149.1 | 682 |
| Object Detection | MS COCO (test-dev) | mAP@.5 | 78.5 | 677 |
| Text-to-Image Retrieval | MS-COCO 5K (test) | R@1 | 68.3 | 244 |
| Object Detection | MS-COCO 2017 (val) | mAP | 56.74 | 237 |
| Image Retrieval | MS-COCO 5K (test) | R@1 | 67.2 | 217 |
| Object Detection | MS COCO (val) | mAP | 0.603 | 211 |
| Text-to-Image Generation | MS-COCO (val) | FID | 1.53 | 202 |
| Text Retrieval | MS-COCO 5K (test) | R@1 | 84.8 | 182 |
| Text-to-Image Retrieval | MS-COCO | R@1 | 65.7 | 151 |
| Object Hallucination Evaluation | MS-COCO (POPE Adversarial) | Accuracy | 87.62 | 138 |
| Text-to-Image Generation | MS-COCO 2014 (val) | FID | 2.47 | 137 |
| Image-to-Text Retrieval | MS-COCO | R@1 | 80.5 | 132 |
| Object Detection | MS COCO novel classes | nAP | 2,450 | 132 |
| Text-to-Image Generation | MS-COCO | FID | 5.28 | 131 |
| Image Retrieval | MS-COCO 1K (test) | R@1 | 80.1 | 128 |
| Object Detection | MS COCO novel classes 2017 (val) | AP | 22.73 | 123 |
| Image-to-Text Retrieval | MS-COCO 1K (test) | R@1 | 82 | 121 |
| Image Captioning | MS COCO (test) | CIDEr | 140.4 | 120 |
| Object Hallucination Evaluation | MS-COCO POPE (Popular) | Accuracy | 90.76 | 108 |
| Text-to-Image Generation | MS-COCO 2017 (val) | FID | 20.51 | 100 |
| Image Retrieval | MS-COCO (test) | MAP | 84.08 | 98 |
| Object Detection | MS-COCO 2017 (test) | AP | 53.9 | 82 |
| Multi-label Classification | MS-COCO 2014 (test) | mAP | 91.3 | 81 |
| Retrieval | MS-COCO | Ave Aes | 5.109 | 72 |
| Object Hallucination Evaluation | MS-COCO POPE Random | Accuracy | 92.36 | 71 |
Showing 25 of 385 rows
...