
Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering

About

Multimodal LLMs (MLLMs) extend large language models to handle multimodal inputs, combining text and image data. They have recently garnered attention for their capability to address complex tasks involving both modalities. However, their effectiveness is limited to the knowledge acquired during training, which restricts their practical utility. In this work, we introduce a novel method to enhance the adaptability of MLLMs by integrating external knowledge sources. Our proposed model, Reflective LLaVA (ReflectiVA), utilizes reflective tokens to dynamically determine whether external knowledge is needed and to predict the relevance of information retrieved from an external database. These tokens are trained with a two-stage, two-model training recipe. This ultimately enables the MLLM to manage external knowledge while preserving fluency and performance on tasks where external knowledge is not needed. Through our experiments, we demonstrate the efficacy of ReflectiVA for knowledge-based visual question answering, highlighting its superior performance compared to existing methods. Source code and trained models are publicly available at https://aimagelab.github.io/ReflectiVA.
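The inference flow described above can be sketched as follows. This is a minimal, hypothetical illustration of reflective-token gating: the model first emits a token deciding whether retrieval is needed, then judges the relevance of each retrieved passage. The token names, model interface, and toy stubs are assumptions for illustration, not ReflectiVA's actual API.

```python
# Hypothetical sketch of reflective-token gating for knowledge-based VQA.
# Token names ([RET]/[NORET], [REL]/[NOREL]) and the MLLM/retriever
# interfaces are illustrative stand-ins, not the paper's actual code.

RET, NORET = "[RET]", "[NORET]"   # does the query need external knowledge?
REL, NOREL = "[REL]", "[NOREL]"   # is a retrieved passage relevant?

def answer(mllm, retriever, image, question, k=5):
    """Decide retrieval need, filter retrieved passages by relevance, answer."""
    if mllm.needs_retrieval(image, question) == NORET:
        return mllm.generate(image, question)          # answer directly
    passages = retriever.search(image, question, k=k)  # query external DB
    kept = [p for p in passages
            if mllm.judge_relevance(image, question, p) == REL]
    return mllm.generate(image, question, context=kept)

# Toy stubs so the control flow can be exercised end to end.
class ToyMLLM:
    def needs_retrieval(self, image, q):
        return RET if ("who" in q or "when" in q) else NORET
    def judge_relevance(self, image, q, p):
        return REL if "Eiffel" in p else NOREL
    def generate(self, image, q, context=None):
        if context is not None:
            return f"answer using {len(context)} passages"
        return "direct answer"

class ToyRetriever:
    def search(self, image, q, k):
        return ["Eiffel Tower built 1889", "unrelated text"][:k]

print(answer(ToyMLLM(), ToyRetriever(), None, "when was this built?"))
# prints: answer using 1 passages
print(answer(ToyMLLM(), ToyRetriever(), None, "what color is it?"))
# prints: direct answer
```

The key design point is that the same generative model produces both the answer and the control tokens, so no separate retrieval classifier is needed at inference time.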

Federico Cocchi, Nicholas Moratelli, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Question Answering | E-VQA (test) | Accuracy | 35.5 | 85
Visual Question Answering | Enc-VQA (test) | Single-Hop Accuracy | 36.8 | 84
Visual Question Answering | InfoSeek (test) | - | - | 81
Knowledge-Intensive Visual Question Answering | InfoSeek (val) | Accuracy (All) | 40.2 | 50
Visual Question Answering | InfoSeek (val) | Overall Accuracy | 43.9 | 38
Knowledge-Intensive Visual Question Answering | E-VQA (test) | Accuracy (All) | 35.5 | 34
Visual Question Answering | InfoSeek | Overall Score | 40.1 | 30
Knowledge-based Visual Question Answering | E-VQA Single-Hop | Accuracy | 36.8 | 27
Knowledge-based Visual Question Answering | InfoSeek (Unseen Question) | Accuracy | 43.5 | 19
Knowledge-based Visual Question Answering | InfoSeek (Unseen Entity) | Accuracy | 44.3 | 19

Showing 10 of 24 rows

Other info

Code: https://aimagelab.github.io/ReflectiVA