
ABINet++: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Spotting

About

Scene text spotting is of great importance to the computer vision community due to its wide variety of applications. Recent methods attempt to introduce linguistic knowledge for recognition in challenging conditions rather than relying on pure visual classification. However, how to effectively model linguistic rules in end-to-end deep networks remains a research challenge. In this paper, we argue that the limited capacity of language models comes from 1) implicit language modeling; 2) unidirectional feature representation; and 3) a language model with noisy input. Correspondingly, we propose ABINet++, an autonomous, bidirectional and iterative network for scene text spotting. Firstly, the autonomous strategy enforces explicit language modeling by decoupling the recognizer into a vision model and a language model and blocking gradient flow between the two. Secondly, a novel bidirectional cloze network (BCN) is proposed as the language model, built on bidirectional feature representation. Thirdly, we propose an iterative correction scheme for the language model, which effectively alleviates the impact of noisy input. Finally, to improve ABINet++ on long text recognition, we aggregate horizontal features by embedding Transformer units inside a U-Net, and design a position and content attention module that integrates character order and content to attend to character features precisely. ABINet++ achieves state-of-the-art performance on both scene text recognition and scene text spotting benchmarks, consistently demonstrating the superiority of our method in various environments, especially on low-quality images. Moreover, extensive experiments in both English and Chinese show that a text spotter incorporating our language modeling method can significantly improve both its accuracy and speed compared with commonly used attention-based recognizers.
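The three ideas in the abstract (gradient blocking between the vision and language models, a cloze-style language model, and iterative correction) can be sketched at a toy level. The code below is an illustrative assumption, not the authors' implementation: the models are stand-in functions on strings, and `stop_gradient` is a no-op placeholder for what would be a tensor `detach` in a real training pipeline.

```python
def stop_gradient(x):
    # Placeholder for blocking gradient flow between the vision model and
    # the language model (the "autonomous" strategy). In a real network this
    # would be a tensor detach; on plain Python values it is a no-op.
    return x

def vision_model(image):
    # Toy visual recognizer: returns a (possibly misread) character string.
    return image

def language_model(text):
    # Toy stand-in for the bidirectional cloze network (BCN): refines a
    # prediction using linguistic context. Here, a lookup of known misreads.
    fixes = {"shawing": "showing"}
    return fixes.get(text, text)

def fuse(visual_pred, linguistic_pred):
    # The real model fuses visual and linguistic features; this sketch
    # simply adopts the language model's refinement.
    return linguistic_pred

def recognize(image, n_iter=3):
    # Iterative correction: the fused prediction is repeatedly fed back
    # into the language model, reducing the effect of noisy input.
    pred = vision_model(image)
    for _ in range(n_iter):
        pred = fuse(pred, language_model(stop_gradient(pred)))
    return pred

print(recognize("shawing"))  # toy misread refined by the language model
```

The loop converges once the language model stops changing the prediction, which mirrors how iterative correction lets a noisy visual prediction be refined over several passes rather than in a single shot.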

Shancheng Fang, Zhendong Mao, Hongtao Xie, Yuxin Wang, Chenggang Yan, Yongdong Zhang • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Scene Text Recognition | IC13, IC15, IIIT, SVT, SVTP, CUTE80 (average of 6 benchmarks, test) | Average Accuracy | 91.93 | 105
Scene Text Recognition | SVT 647 (test) | Accuracy | 97.1 | 101
Scene Text Recognition | CUTE 288 samples (test) | Word Accuracy | 94.4 | 98
End-to-End Text Spotting | ICDAR 2015 | Strong Score | 86.1 | 80
End-to-End Scene Text Spotting | Total-Text | Hmean (None) | 79.4 | 55
Scene Text Recognition | SVTP 645 (test) | Accuracy | 92.2 | 54
Scene Text Recognition | IIIT 3000 (test) | Accuracy | 97.2 | 35
Text Recognition | Chinese text recognition benchmark | Scene Acc | 66.6 | 33
Scene Text Recognition | IC15 1811 (test) | Accuracy | 89.2 | 30
Scene Text Recognition | IC13 857 (test) | Accuracy | 98.1 | 27

Showing 10 of 17 rows.

Other info

Code
