
A Single-Shot Arbitrarily-Shaped Text Detector based on Context Attended Multi-Task Learning

About

Detecting scene text of arbitrary shapes has been a challenging task in recent years. In this paper, we propose a novel segmentation-based text detector, namely SAST, which employs a context attended multi-task learning framework based on a Fully Convolutional Network (FCN) to learn various geometric properties for the reconstruction of polygonal representations of text regions. Taking the sequential characteristics of text into consideration, a Context Attention Block is introduced to capture long-range dependencies of pixel information and obtain a more reliable segmentation. In post-processing, a Point-to-Quad assignment method is proposed to cluster pixels into text instances by integrating both high-level object knowledge and low-level pixel information in a single shot. Moreover, the polygonal representation of arbitrarily-shaped text can be extracted much more effectively with the proposed geometric properties. Experiments on several benchmarks, including ICDAR2015, ICDAR2017-MLT, SCUT-CTW1500, and Total-Text, demonstrate that SAST achieves better or comparable performance in terms of accuracy. Furthermore, the proposed algorithm runs at 27.63 FPS on SCUT-CTW1500 with an Hmean of 81.0% on a single NVIDIA Titan Xp graphics card, surpassing most of the existing segmentation-based methods.
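The Point-to-Quad idea described above can be illustrated with a minimal sketch: each detected text pixel regresses an offset toward the center of its text instance, and pixels are then clustered by assigning each one to the nearest predicted quad center. The function name and array shapes below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def point_to_quad_assign(pixel_coords, pixel_offsets, quad_centers):
    """Assign each text pixel to the nearest predicted quad center.

    Hypothetical simplification of Point-to-Quad assignment:
    each pixel "votes" for an instance center by adding its regressed
    offset to its own coordinate, then joins the quad whose center
    lies closest to that vote.

    pixel_coords  : (N, 2) array of pixel (x, y) positions
    pixel_offsets : (N, 2) array of regressed center offsets
    quad_centers  : (Q, 2) array of detected quad centers
    returns       : (N,) array of quad indices, one per pixel
    """
    votes = pixel_coords + pixel_offsets  # (N, 2) predicted instance centers
    # (N, Q) pairwise distances between pixel votes and quad centers
    dists = np.linalg.norm(votes[:, None, :] - quad_centers[None, :, :], axis=2)
    return dists.argmin(axis=1)
```

In the actual detector the quad centers come from the high-level detection branch and the offsets from the pixel-level geometry maps, so the clustering fuses both sources in one pass.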

Pengfei Wang, Chengquan Zhang, Fei Qi, Zuming Huang, Mengyi En, Junyu Han, Jingtuo Liu, Errui Ding, Guangming Shi • 2019

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Text Detection | CTW1500 (test) | Precision: 85.3 | 157 |
| Text Detection | Total-Text (test) | F-Measure: 80.2 | 126 |
| Text Detection | ICDAR 2015 (test) | F1 Score: 86.9 | 108 |
| Scene Text Detection | Total-Text | Precision: 83.8 | 63 |
