LayoutMask: Enhance Text-Layout Interaction in Multi-modal Pre-training for Document Understanding
About
Visually-rich Document Understanding (VrDU) has attracted much research attention over the past years. Transformer-based models pre-trained on large numbers of document images have led to significant performance gains in this field. The major challenge is how to fuse the different modalities of a document (text, layout, and image) in a unified model with different pre-training tasks. This paper focuses on improving text-layout interactions and proposes a novel multi-modal pre-training model, LayoutMask. LayoutMask uses local 1D position, instead of global 1D position, as layout input and has two pre-training objectives: (1) Masked Language Modeling: predicting masked tokens with two novel masking strategies; (2) Masked Position Modeling: predicting masked 2D positions to improve layout representation learning. LayoutMask can enhance the interactions between text and layout modalities in a unified model and produce adaptive and robust multi-modal representations for downstream tasks. Experimental results show that our proposed method can achieve state-of-the-art results on a wide variety of VrDU problems, including form understanding, receipt understanding, and document image classification.
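To make the local 1D position idea concrete, below is a minimal PyTorch sketch: instead of one position index running globally over the whole document, each token's index restarts at its segment boundary (e.g. per OCR line), and the result is summed with token and quantized 2D bounding-box embeddings. All names, sizes, the shared coordinate table, the additive fusion, and the reserved mask bin for Masked Position Modeling are illustrative assumptions, not the official LayoutMask implementation.

```python
import torch
import torch.nn as nn

def local_positions(segment_ids: torch.Tensor) -> torch.Tensor:
    """Local 1D position: token index within its own segment.

    segment_ids: (seq_len,) tensor of non-decreasing segment labels,
    e.g. one id per OCR text line. Positions reset to 0 at every
    segment boundary, instead of running globally over the document.
    """
    pos = torch.arange(segment_ids.size(0))
    is_start = torch.ones_like(segment_ids, dtype=torch.bool)
    is_start[1:] = segment_ids[1:] != segment_ids[:-1]
    seg_start = torch.where(is_start, pos, torch.zeros_like(pos))
    seg_start = torch.cummax(seg_start, dim=0).values  # carry each segment's start forward
    return pos - seg_start

class TextLayoutEmbedding(nn.Module):
    """Sums token, local 1D position, and 2D layout embeddings.

    Hypothetical sketch: all sizes and the additive fusion are assumptions.
    """
    def __init__(self, vocab_size=30522, hidden=768, max_pos=512, coord_bins=1024):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, hidden)
        self.pos1d = nn.Embedding(max_pos, hidden)     # local, not global, 1D position
        self.coord = nn.Embedding(coord_bins, hidden)  # shared x/y coordinate table
        # Reserved bin for masked 2D positions: for Masked Position Modeling,
        # a token's bbox ids would be replaced by this id and the model
        # trained to predict the original coordinates.
        self.mask_bin = coord_bins - 1

    def forward(self, token_ids, segment_ids, bboxes):
        # token_ids: (seq_len,), segment_ids: (seq_len,), bboxes: (seq_len, 4)
        emb = self.tok(token_ids) + self.pos1d(local_positions(segment_ids))
        emb = emb + self.coord(bboxes).sum(dim=-2)  # add x0, y0, x1, y1 embeddings
        return emb

# Tiny usage example with three tokens over two segments.
tokens = torch.tensor([101, 2054, 102])
segments = torch.tensor([0, 0, 1])        # third token starts a new segment
boxes = torch.randint(0, 1023, (3, 4))    # quantized (x0, y0, x1, y1)
out = TextLayoutEmbedding()(tokens, segments, boxes)
print(out.shape)  # torch.Size([3, 768])
```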
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Document Classification | RVL-CDIP (test) | Accuracy | 93.8 | 306 |
| Information Extraction | CORD (test) | F1 Score | 97.19 | 133 |
| Entity Extraction | FUNSD (test) | Entity F1 Score | 93.2 | 104 |
| Information Extraction | SROIE (test) | F1 Score | 97.27 | 58 |
| Semantic Entity Recognition | CORD | F1 Score | 97.19 | 55 |
| Semantic Entity Recognition | FUNSD | EN Score | 93.2 | 31 |
| Semantic Entity Recognition | SROIE | SER Accuracy | 97.27 | 15 |