
Benchmarking Large Multimodal Models against Common Corruptions

About

This technical report aims to fill a gap in the assessment of large multimodal models (LMMs) by specifically examining the self-consistency of their outputs when subjected to common corruptions. We investigate the cross-modal interactions between text, image, and speech, encompassing four essential generation tasks: text-to-image, image-to-text, text-to-speech, and speech-to-text. We create a comprehensive benchmark, named MMCBench, that covers more than 100 popular LMMs (over 150 model checkpoints in total). A thorough evaluation under common corruptions is critical for practical deployment and facilitates a better understanding of the reliability of cutting-edge LMMs. The benchmarking code is available at https://github.com/sail-sg/MMCBench
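The self-consistency evaluation described above can be sketched as: apply a common corruption to an input, run the model on both the clean and corrupted versions, and compare the two outputs in a shared embedding space. A minimal illustration follows; the specific corruption (additive Gaussian noise), the `severity` knob, and the cosine-similarity proxy are illustrative assumptions, not MMCBench's exact pipeline.

```python
import math
import random

def gaussian_noise(pixels, severity=0.1, seed=0):
    """Apply one example of a common corruption (additive Gaussian
    noise) to a flat list of pixel intensities in [0, 1].
    `severity` is a hypothetical knob, not an MMCBench parameter."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, p + rng.gauss(0.0, severity))) for p in pixels]

def cosine_similarity(a, b):
    """Self-consistency proxy: cosine similarity between two output
    embeddings, e.g. embeddings of the captions generated for the
    clean input and for the corrupted input."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Usage sketch: corrupt an input, then (with a real model) embed the
# clean-input output and corrupted-input output and compare them.
clean = [0.5] * 16
corrupted = gaussian_noise(clean, severity=0.2, seed=1)
score = cosine_similarity(clean, corrupted)  # stand-in for output embeddings
```

A higher similarity between the two outputs indicates a more self-consistent model under that corruption.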

Jiawei Zhang, Tianyu Pang, Chao Du, Yi Ren, Bo Li, Min Lin • 2024

Related benchmarks

Task                         Dataset                                                                    Result           Rank
Multimodal Reward Modeling   VL-RewardBench                                                             Accuracy 19.04   17
Multimodal Reward Modeling   Multimodal RewardBench                                                     Accuracy 42      17
Multimodal Reward Modeling   MM-RLHF-RewardBench                                                        Accuracy 17.1    9
Multimodal Reward Modeling   VL-RewardBench, Multimodal RewardBench, and MM-RLHF-RewardBench Aggregate  Accuracy 26.05   9
