
M^2ConceptBase: A Fine-Grained Aligned Concept-Centric Multimodal Knowledge Base

About

Multimodal knowledge bases (MMKBs) provide cross-modal aligned knowledge crucial for multimodal tasks. However, the images in existing MMKBs are generally collected for entities in encyclopedia knowledge graphs. Therefore, detailed groundings of visual semantics with linguistic concepts are lacking, which are essential for the visual concept cognition ability of multimodal models. Addressing this gap, we introduce M^2ConceptBase, the first concept-centric MMKB. M^2ConceptBase models concepts as nodes with associated images and detailed textual descriptions. We propose a context-aware multimodal symbol grounding approach to align concept-image and concept-description pairs using context information from image-text datasets. Comprising 951K images and 152K concepts, M^2ConceptBase links each concept to an average of 6.27 images and a single description, ensuring comprehensive visual and textual semantics. Human studies confirm more than 95% alignment accuracy, underscoring its quality. Additionally, our experiments demonstrate that M^2ConceptBase significantly enhances VQA model performance on the OK-VQA task. M^2ConceptBase also substantially improves the fine-grained concept understanding capabilities of multimodal large language models through retrieval augmentation in two concept-related tasks, highlighting its value.
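The concept-centric structure described above (each concept node linked to a set of aligned images and a single textual description) can be sketched as a minimal record type. This is an illustrative sketch only; the field names and `ConceptNode` class are hypothetical, not the released M^2ConceptBase schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a concept node holding one aligned description and
# its set of aligned images (the paper reports ~6.27 images per concept).
@dataclass
class ConceptNode:
    concept: str                 # linguistic concept, e.g. "golden retriever"
    description: str             # the single aligned textual description
    image_paths: list[str] = field(default_factory=list)

    def add_image(self, path: str) -> None:
        # Attach another grounded image to this concept.
        self.image_paths.append(path)

node = ConceptNode(
    concept="golden retriever",
    description="A medium-large gun dog breed with a dense golden coat.",
)
node.add_image("images/golden_retriever_001.jpg")
node.add_image("images/golden_retriever_002.jpg")
print(node.concept, len(node.image_paths))  # golden retriever 2
```

A real knowledge base would add identifiers and cross-concept relations; this sketch only mirrors the concept-image-description alignment the abstract describes.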

Zhiwei Zha, Jiaan Wang, Zhixu Li, Xiangru Zhu, Wei Song, Yanghua Xiao • 2023

Related benchmarks

| Task                            | Dataset             | Result                      | Rank |
|---------------------------------|---------------------|-----------------------------|------|
| Video Question Answering        | VCGPT (test)        | Model-as-Judge Score: 42.78 | 12   |
| Audio Question Answering        | AudioCaps-QA (test) | Model-as-Judge Score: 49.78 | 12   |
| Audio-Visual Question Answering | VALOR (test)        | Model-as-Judge Score: 32.31 | 12   |
