
Visual-Semantic Transformer for Scene Text Recognition

About

Modeling semantic information is helpful for scene text recognition. In this work, we propose to model semantic and visual information jointly with a Visual-Semantic Transformer (VST). The VST first explicitly extracts primary semantic information from visual feature maps with a transformer module and a primary visual-semantic alignment module. The semantic information is then joined with the visual feature maps (viewed as a sequence) to form a pseudo multi-domain sequence combining visual and semantic information, which is subsequently fed into a transformer-based interaction module to enable learning of interactions between visual and semantic features. In this way, the visual features can be enhanced by the semantic information and vice versa. The enhanced visual features are further decoded by a secondary visual-semantic alignment module that shares weights with the primary one. Finally, the decoded visual features and the enhanced semantic features are jointly processed by a third transformer module to obtain the final text prediction. Experiments on seven public benchmarks covering regular and irregular text recognition datasets verify the effectiveness of our proposed model, which reaches state-of-the-art results on four of the seven benchmarks.

Xin Tang, Yongquan Lai, Ying Liu, Yuanyuan Fu, Rui Fang • 2021
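The pipeline described in the abstract can be sketched as a sequence of attention steps. The sketch below uses plain NumPy scaled dot-product attention; all shapes, the character-query formulation, and the module boundaries are illustrative assumptions, not the authors' implementation (which also includes learned projections, multi-head attention, and feed-forward layers omitted here).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: the core operation of each module.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

d = 64        # feature dimension (assumed)
hw = 48       # flattened visual feature-map length (assumed)
n_chars = 25  # maximum text length (assumed)

rng = np.random.default_rng(0)
visual = rng.normal(size=(hw, d))  # visual feature map viewed as a sequence

# 1) Primary visual-semantic alignment: character-position queries
#    (a hypothetical formulation) attend to the visual features to
#    extract primary semantic information.
char_queries = rng.normal(size=(n_chars, d))
semantic = attention(char_queries, visual, visual)

# 2) Interaction module: concatenate visual and semantic tokens into a
#    pseudo multi-domain sequence and apply self-attention, so each
#    domain can be enhanced by the other.
joint = np.concatenate([visual, semantic], axis=0)
joint = attention(joint, joint, joint)
visual_enh, semantic_enh = joint[:hw], joint[hw:]

# 3) Secondary alignment (weight-shared with step 1 in the paper)
#    decodes the enhanced visual features back to character positions.
decoded = attention(char_queries, visual_enh, visual_enh)

# 4) A third transformer module jointly processes the decoded visual
#    and enhanced semantic features for the final text prediction.
fused = attention(decoded, semantic_enh, semantic_enh)
print(fused.shape)  # one feature vector per character position
```

The output of the final step would feed a per-position character classifier; the weight sharing in step 3 is the mechanism the abstract highlights for keeping the two alignment stages consistent.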

Related benchmarks

Task                     Dataset          Metric     Result   Rank
Scene Text Recognition   IIIT5K           Accuracy   96.3     149
Scene Text Recognition   CUTE             Accuracy   95.1     92
Scene Text Recognition   IC15             Accuracy   85.4     86
Scene Text Recognition   SVT              Accuracy   93.8     67
Scene Text Recognition   SVTP             Accuracy   88.7     52
Scene Text Recognition   IC 2013 (test)   Accuracy   96.4     51
