
FastTextSpotter: A High-Efficiency Transformer for Multilingual Scene Text Spotting

About

The proliferation of scene text in both structured and unstructured environments presents significant challenges for optical character recognition (OCR), necessitating more efficient and robust text spotting solutions. This paper presents FastTextSpotter, a framework that integrates a Swin Transformer visual backbone with a Transformer encoder-decoder architecture, enhanced by a novel, faster self-attention unit, SAC2, to improve processing speed while maintaining accuracy. FastTextSpotter has been validated across multiple datasets, including ICDAR 2015 for regular texts and CTW1500 and TotalText for arbitrarily shaped texts, benchmarking against current state-of-the-art models. Our results indicate that FastTextSpotter not only achieves superior accuracy in detecting and recognizing multilingual scene text (English and Vietnamese) but also improves model efficiency, thereby setting new benchmarks in the field. This study underscores the potential of advanced transformer architectures for improving the adaptability and speed of text spotting applications in diverse real-world settings. The dataset, code, and pre-trained models have been released on our GitHub.
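The abstract describes SAC2 as a faster variant of the self-attention unit inside the Transformer encoder-decoder. The paper defines SAC2's exact formulation, so the sketch below shows only the standard scaled dot-product self-attention that such a unit accelerates; all function and variable names here are illustrative, not the authors' API.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Baseline scaled dot-product self-attention over a token sequence.

    SAC2 in FastTextSpotter is a faster drop-in for this operation;
    this function is only the standard computation it builds on.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # project tokens
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                  # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                               # mix value vectors

rng = np.random.default_rng(0)
tokens = rng.standard_normal((5, 16))                # 5 tokens, dim 16
w_q, w_k, w_v = (rng.standard_normal((16, 16)) * 0.1 for _ in range(3))
out = self_attention(tokens, w_q, w_k, w_v)
print(out.shape)                                     # (5, 16)
```

The quadratic cost of the `q @ k.T` product over all token pairs is the usual bottleneck that efficiency-oriented attention variants target.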

Alloy Das, Sanket Biswas, Umapada Pal, Josep Lladós, Saumik Bhattacharya • 2024

Related benchmarks

Task                      Dataset               Result                       Rank
Scene Text Spotting       Total-Text (test)     --                           105
End-to-End Text Spotting  ICDAR 2015 (test)     Generic F-measure: 75.4      62
End-to-End Text Spotting  SCUT-CTW1500 (test)   F-Measure (None config): 56  34
