
LayoutMask: Enhance Text-Layout Interaction in Multi-modal Pre-training for Document Understanding

About

Visually-rich Document Understanding (VrDU) has attracted much research attention in recent years. Models pre-trained on large collections of document images with transformer-based backbones have led to significant performance gains in this field. The major challenge is how to fuse the different modalities of documents (text, layout, and image) in a unified model with different pre-training tasks. This paper focuses on improving text-layout interactions and proposes a novel multi-modal pre-training model, LayoutMask. LayoutMask uses local 1D position, instead of global 1D position, as layout input and has two pre-training objectives: (1) Masked Language Modeling: predicting masked tokens with two novel masking strategies; (2) Masked Position Modeling: predicting masked 2D positions to improve layout representation learning. LayoutMask can enhance the interactions between the text and layout modalities in a unified model and produce adaptive and robust multi-modal representations for downstream tasks. Experimental results show that the proposed method achieves state-of-the-art results on a wide variety of VrDU problems, including form understanding, receipt understanding, and document image classification.
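To illustrate the local vs. global 1D position distinction from the abstract, here is a minimal sketch. It assumes tokens are grouped into layout segments (e.g., OCR text lines); the function and variable names are illustrative and not taken from the paper's implementation.

```python
# Sketch: global 1D positions run over the whole document, while local
# 1D positions (as LayoutMask uses) restart within each segment, so the
# model must rely on 2D layout to order segments.

def global_1d_positions(segments):
    """Assign one running index across all tokens in the document."""
    positions, idx = [], 0
    for seg in segments:
        seg_pos = []
        for _ in seg:
            seg_pos.append(idx)
            idx += 1
        positions.append(seg_pos)
    return positions

def local_1d_positions(segments):
    """Restart the index inside each segment."""
    return [list(range(len(seg))) for seg in segments]

# Hypothetical two-segment document (two OCR lines).
segments = [["Invoice", "No."], ["Total", ":", "$42.00"]]
print(global_1d_positions(segments))  # [[0, 1], [2, 3, 4]]
print(local_1d_positions(segments))   # [[0, 1], [0, 1, 2]]
```

With local positions, reading order within a segment is still encoded, but cross-segment order must come from the 2D layout input, which is the text-layout interaction the paper aims to strengthen.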

Yi Tu, Ya Guo, Huan Chen, Jinyang Tang • 2023

Related benchmarks

Task                        | Dataset         | Result                 | Rank
Document Classification     | RVL-CDIP (test) | Accuracy 93.8          | 306
Information Extraction      | CORD (test)     | F1 Score 97.19         | 133
Entity Extraction           | FUNSD (test)    | Entity F1 Score 93.2   | 104
Information Extraction      | SROIE (test)    | F1 Score 97.27         | 58
Semantic Entity Recognition | CORD            | F1 Score 97.19         | 55
Semantic Entity Recognition | FUNSD           | EN Score 93.2          | 31
Semantic Entity Recognition | SROIE           | SER Accuracy 97.27     | 15
