
Layout Agnostic Scene Text Image Synthesis with Diffusion Models

About

While diffusion models have significantly advanced the quality of image generation, their capability to accurately and coherently render text within these images remains a substantial challenge. Conventional diffusion-based methods for scene text generation are typically limited by their reliance on an intermediate layout output. This dependency often results in a constrained diversity of text styles and fonts, an inherent limitation stemming from the deterministic nature of the layout generation phase. To address these challenges, this paper introduces SceneTextGen, a novel diffusion-based model specifically designed to circumvent the need for a predefined layout stage. By doing so, SceneTextGen facilitates a more natural and varied representation of text. The novelty of SceneTextGen lies in its integration of three key components: a character-level encoder for capturing detailed typographic properties, coupled with a character-level instance segmentation model and a word-level spotting model to address the issues of unwanted text generation and minor character inaccuracies. We validate the performance of our method by demonstrating improved character recognition rates on generated images across different public visual text datasets, in comparison to both standard diffusion-based methods and text-specific methods.
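The abstract's key idea of a character-level encoder can be illustrated with a minimal sketch: each character (rather than a word or subword token) is mapped to its own embedding, so the conditioning signal fed to the diffusion model preserves per-glyph details such as spelling, length, and repeated letters. All names, the character set, and dimensions below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical character-level text encoder sketch. A fixed character
# vocabulary is embedded via a lookup table; a string becomes a
# fixed-length sequence of per-character vectors suitable as a
# conditioning input. EMBED_DIM and MAX_LEN are illustrative choices.
CHARSET = list(" abcdefghijklmnopqrstuvwxyz0123456789")
CHAR2ID = {c: i for i, c in enumerate(CHARSET)}
EMBED_DIM = 8
rng = np.random.default_rng(0)
EMBED_TABLE = rng.standard_normal((len(CHARSET), EMBED_DIM))

def encode_chars(text: str, max_len: int = 16) -> np.ndarray:
    """Map a string to a (max_len, EMBED_DIM) array of character
    embeddings, truncated/padded to a fixed length as diffusion
    conditioning sequences typically are."""
    ids = [CHAR2ID.get(c, 0) for c in text.lower()[:max_len]]
    ids += [0] * (max_len - len(ids))  # pad with the space id
    return EMBED_TABLE[np.array(ids)]

cond = encode_chars("open 24h")
print(cond.shape)  # (16, 8)
```

Because every character keeps a distinct vector, two strings that differ by a single letter produce different conditioning sequences, which is the property a word-level encoder tends to blur.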

Qilong Zhangli, Jindong Jiang, Di Liu, Licheng Yu, Xiaoliang Dai, Ankit Ramchandani, Guan Pang, Dimitris N. Metaxas, Praveen Krishnan • 2024

Related benchmarks

Task                        | Dataset             | Result           | Rank
Text-to-Image Generation    | MARIO-Eval          | CLIPScore 0.3455 | 25
OCR-based Text Recognition  | MARIO-7M (test)     | AP 52.74         | 7
OCR-based Text Recognition  | TMDB (test)         | AP 38.13         | 7
OCR-based Text Recognition  | OpenLibrary (test)  | AP 41.36         | 7
