
Multi-modal Text Recognition Networks: Interactive Enhancements between Visual and Semantic Features

About

Linguistic knowledge has brought great benefits to scene text recognition by providing semantics to refine character sequences. However, since linguistic knowledge has been applied individually on the output sequence, previous methods have not fully utilized the semantics to understand visual clues for text recognition. This paper introduces a novel method, called Multi-modAl Text Recognition Network (MATRN), that enables interactions between visual and semantic features for better recognition performances. Specifically, MATRN identifies visual and semantic feature pairs and encodes spatial information into semantic features. Based on the spatial encoding, visual and semantic features are enhanced by referring to related features in the other modality. Furthermore, MATRN stimulates combining semantic features into visual features by hiding visual clues related to the character in the training phase. Our experiments demonstrate that MATRN achieves state-of-the-art performances on seven benchmarks with large margins, while naive combinations of two modalities show less-effective improvements. Further ablative studies prove the effectiveness of our proposed components. Our implementation is available at https://github.com/wp03052/MATRN.
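The cross-modal enhancement described above, where each modality refines its features by attending to related features in the other, can be sketched as scaled dot-product cross-attention with a residual connection. This is an illustrative simplification, not the authors' implementation: the function names, feature shapes, and the plain-numpy attention are assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_enhance(queries, keys_values):
    """Enhance one modality's features by attending to the other.

    queries:     (n_q, d) features to be enhanced (e.g. visual)
    keys_values: (n_k, d) features of the other modality (e.g. semantic)
    Returns enhanced features of shape (n_q, d).
    """
    d = queries.shape[-1]
    # Attention weights: how much each query feature refers to
    # each feature of the other modality.
    attn = softmax(queries @ keys_values.T / np.sqrt(d), axis=-1)  # (n_q, n_k)
    # Residual connection keeps the original features and adds
    # the attended cross-modal context.
    return queries + attn @ keys_values

# Toy example: 8 visual positions and 5 character (semantic) slots, dim 16.
rng = np.random.default_rng(0)
visual = rng.normal(size=(8, 16))
semantic = rng.normal(size=(5, 16))

# Each modality is enhanced by referring to the other one.
visual_enh = cross_modal_enhance(visual, semantic)
semantic_enh = cross_modal_enhance(semantic, visual)
```

In the paper's training scheme, visual features tied to a character are additionally hidden (masked) so the network is pushed to pull the missing information from the semantic side; that masking step is omitted from this sketch.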

Byeonghu Na, Yoonsik Kim, Sungrae Park • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Scene Text Recognition | SVT (test) | Word Accuracy | 95 | 289
Scene Text Recognition | IIIT5K (test) | Word Accuracy | 96.6 | 244
Scene Text Recognition | IC15 (test) | Word Accuracy | 82.8 | 210
Scene Text Recognition | IC13 (test) | Word Accuracy | 95.8 | 207
Scene Text Recognition | SVTP (test) | Word Accuracy | 90.6 | 153
Scene Text Recognition | IC13, IC15, IIIT, SVT, SVTP, CUTE80 (average of 6 benchmarks, test) | Average Accuracy | 93.2 | 105
Scene Text Recognition | CUTE80 (test) | Accuracy | 0.935 | 87
Scene Text Recognition | CUTE (test) | Accuracy | 93.5 | 59
Scene Text Recognition | 6 common benchmarks (test) | Word Accuracy (IIIT) | 98.2 | 57
Scene Text Recognition | Union14M Benchmark | Curve Accuracy | 80.5 | 42

(Showing 10 of 15 rows.)

Other info

Code: https://github.com/wp03052/MATRN
