
Localize, Group, and Select: Boosting Text-VQA by Scene Text Modeling

About

As an important task in multimodal context understanding, Text-VQA (Visual Question Answering) aims to answer questions by reading the text in images. It differs from the original VQA task in that Text-VQA requires substantial understanding of scene-text relationships in addition to cross-modal grounding capability. In this paper, we propose Localize, Group, and Select (LOGOS), a novel model that tackles this problem from multiple aspects. LOGOS leverages two grounding tasks to better localize the key information in the image, utilizes scene text clustering to group individual OCR (Optical Character Recognition) tokens, and learns to select the best answer from different sources of OCR text. Experiments show that LOGOS outperforms previous state-of-the-art methods on two Text-VQA benchmarks without using additional OCR annotation data. Ablation studies and analysis demonstrate the capability of LOGOS to bridge different modalities and better understand scene text.
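To make the grouping step concrete, here is a minimal sketch (not the paper's actual algorithm) of clustering individual OCR tokens by spatial proximity: tokens whose bounding-box centers lie within a distance threshold are merged into one scene-text group. The box format `(x, y, w, h)` and the threshold value are illustrative assumptions.

```python
def cluster_ocr_tokens(boxes, threshold=50.0):
    """Group OCR token boxes (x, y, w, h) whose centers are transitively
    within `threshold` pixels of each other (single-link clustering).
    Returns a list of clusters, each a list of box indices."""

    def center(b):
        x, y, w, h = b
        return (x + w / 2.0, y + h / 2.0)

    def close(i, j):
        (ax, ay), (bx, by) = center(boxes[i]), center(boxes[j])
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= threshold

    clusters = []
    for i in range(len(boxes)):
        # Find every existing cluster this token is close to,
        # then merge them all (plus the token) into one cluster.
        hits = [c for c in clusters if any(close(i, j) for j in c)]
        merged = [i] + [j for c in hits for j in c]
        clusters = [c for c in clusters if c not in hits] + [merged]
    return clusters


# Two adjacent tokens form one group; a distant token stays separate.
groups = cluster_ocr_tokens([(0, 0, 10, 10), (12, 0, 10, 10), (200, 200, 10, 10)])
```

In the actual model, grouped tokens would be fed to the answer-selection stage as candidate scene-text units rather than as isolated words.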

Xiaopeng Lu, Zhen Fan, Yansen Wang, Jean Oh, Carolyn P. Rosé • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | TextVQA (val) | VQA Score | 51.53 | 309 |
| Visual Question Answering | TextVQA (test) | Accuracy | 51.1 | 124 |
| Visual Question Answering | TextVQA v1.0 (val) | Accuracy | 51.53 | 69 |
| Scene Text Visual Question Answering | ST-VQA (val) | ANLS | 0.581 | 30 |
| Visual Question Answering | TextVQA v1.0 (test) | Accuracy | 51.08 | 27 |
| Scene Text Visual Question Answering | ST-VQA (test) | ANLS | 0.579 | 21 |
| Scene Text Visual Question Answering | ST-VQA 1.0 (val) | ANLS | 58.1 | 15 |
| Scene Text Visual Question Answering | ST-VQA 1.0 (test) | ANLS | 57.9 | 14 |
| Scene Text Visual Question Answering | ST-VQA 8 (test) | ANLS | 57.9 | 10 |
| Scene Text Visual Question Answering | ST-VQA 8 (val) | Accuracy | 0.4863 | 8 |
