
PreSTU: Pre-Training for Scene-Text Understanding

About

Vision-and-language (V&L) models often lack the ability to recognize and reason about text embedded in visual inputs, perhaps because V&L pre-training objectives rarely target this skill. In this paper, we propose PreSTU, a novel pre-training recipe dedicated to scene-text understanding (STU). PreSTU introduces OCR-aware pre-training objectives that encourage the model to recognize text in an image and connect it to the rest of the image content. We implement PreSTU using a simple transformer-based encoder-decoder architecture, combined with large-scale image-text datasets whose scene text is obtained from an off-the-shelf OCR system. We empirically demonstrate the effectiveness of this pre-training approach on eight visual question answering and four image captioning benchmarks.
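To make the OCR-aware objective concrete, here is a minimal sketch of one plausible way to build such a training target: the OCR tokens returned by an off-the-shelf system are ordered, split at some point, and the model is asked to generate the remaining scene text given the image plus the prefix. The function name, split scheme, and example tokens below are illustrative assumptions, not the paper's exact recipe.

```python
from typing import List, Tuple

def split_ocr_targets(ocr_tokens: List[str], split_at: int) -> Tuple[str, str]:
    """Split ordered OCR tokens into an input prefix and a generation target.

    The prefix would be fed to the encoder alongside image features; the
    target is what the decoder learns to produce (hypothetical setup).
    """
    prefix = " ".join(ocr_tokens[:split_at])
    target = " ".join(ocr_tokens[split_at:])
    return prefix, target

# Example: scene text read in raster (top-left to bottom-right) order.
tokens = ["OPEN", "24", "HOURS", "FREE", "PARKING"]
prefix, target = split_ocr_targets(tokens, split_at=2)
# prefix -> "OPEN 24"; target -> "HOURS FREE PARKING"
```

Conditioning generation on a partial transcription forces the model to actually read the image rather than memorize frequent text strings.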

Jihyung Kil, Soravit Changpinyo, Xi Chen, Hexiang Hu, Sebastian Goodman, Wei-Lun Chao, Radu Soricut • 2022

Related benchmarks

Task                                  Dataset            Metric     Result   Rank
Visual Question Answering             TextVQA (test)     Accuracy   56.3     124
Scene Text Visual Question Answering  ST-VQA 1.0 (test)  ANLS       65.5     14
Visual Question Answering             ViTextVQA (test)   F1 Score   44.93    10
Visual Question Answering             ViSignVQA (test)   F1 Score   50.37    7
