
What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis

About

Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claims to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to inconsistent choices of training and evaluation datasets. This paper addresses that difficulty with three major contributions. First, we examine the inconsistencies in training and evaluation datasets, and the performance gap that results from them. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. This framework allows for an extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under a single consistent set of training and evaluation datasets. These analyses remove the obstacles that have kept current comparisons from revealing the performance gains of existing modules.
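The four-stage framework described above can be illustrated with a minimal sketch. The stage names and module choices (Trans./Feat./Seq./Pred.) follow the paper; the enumeration code itself is only illustrative plumbing, not the authors' implementation.

```python
# Sketch of the paper's four-stage STR framework:
# Transformation -> Feature extraction -> Sequence modeling -> Prediction.
# Each stage offers interchangeable modules; combinations define an STR model.
from itertools import product

STAGES = {
    "Trans": ["None", "TPS"],            # spatial transformation (rectification)
    "Feat":  ["VGG", "RCNN", "ResNet"],  # visual feature extractor
    "Seq":   ["None", "BiLSTM"],         # contextual sequence modeling
    "Pred":  ["CTC", "Attn"],            # character-sequence prediction
}

def all_combinations():
    """Enumerate every module combination the framework admits."""
    names = list(STAGES)
    return [dict(zip(names, combo)) for combo in product(*STAGES.values())]

combos = all_combinations()
print(len(combos))  # 2 * 3 * 2 * 2 = 24 combinations
# One well-known combination (TPS-ResNet-BiLSTM-Attn) is among them:
assert {"Trans": "TPS", "Feat": "ResNet", "Seq": "BiLSTM", "Pred": "Attn"} in combos
```

Enumerating the 24 combinations this way is what enables the paper's module-wise comparison: each combination is trained and evaluated on one consistent set of datasets.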

Jeonghun Baek, Geewook Kim, Junyeop Lee, Sungrae Park, Dongyoon Han, Sangdoo Yun, Seong Joon Oh, Hwalsuk Lee• 2019

Related benchmarks

Task                      Dataset          Metric          Result   Rank
Scene Text Recognition    SVT (test)       Word Accuracy   91.5     289
Scene Text Recognition    IIIT5K (test)    Word Accuracy   87.9     244
Scene Text Recognition    IC15 (test)      Word Accuracy   85.15    210
Scene Text Recognition    IC13 (test)      Word Accuracy   94.63    207
Scene Text Recognition    SVTP (test)      Word Accuracy   93.7     153
Scene Text Recognition    IIIT5K           Accuracy        87.9     149
Scene Text Recognition    SVT 647 (test)   Accuracy        87.5     101
Scene Text Recognition    CUTE             Accuracy        71.8     92
Scene Text Recognition    CUTE80 (test)    Accuracy        0.74     87
Scene Text Recognition    IC15             Accuracy        89.8     86

(Showing 10 of 62 rows)

Other info

Code
