
SelfDoc: Self-Supervised Document Representation Learning

About

We propose SelfDoc, a task-agnostic pre-training framework for document image understanding. Because documents are multimodal and are intended for sequential reading, our framework exploits the positional, textual, and visual information of every semantically meaningful component in a document, and it models the contextualization between each block of content. Unlike existing document pre-training models, our model is coarse-grained instead of treating individual words as input, therefore avoiding an overly fine-grained representation with excessive contextualization. Beyond that, we introduce cross-modal learning in the model pre-training phase to fully leverage multimodal information from unlabeled documents. For downstream usage, we propose a novel modality-adaptive attention mechanism for multimodal feature fusion that adaptively emphasizes language and vision signals. Our framework benefits from self-supervised pre-training on documents without requiring annotations via a feature-masking training strategy. It achieves superior performance on multiple downstream tasks with significantly fewer document images used in the pre-training stage compared to previous works.
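To make the fusion idea concrete, below is a minimal sketch of how a modality-adaptive gate could weigh language and vision features per document block. This is not the authors' implementation; the function name, shapes, and the simple softmax-gated weighted sum are assumptions for illustration only.

```python
import numpy as np

def modality_adaptive_fusion(lang_feats, vis_feats, W_gate, b_gate):
    """Hypothetical sketch: a learned gate assigns each block a pair of
    weights (language, vision) and fuses the two modality features.

    lang_feats, vis_feats: (n_blocks, d) features for each modality.
    W_gate: (2*d, 2) gate projection; b_gate: (2,) gate bias.
    """
    # Condition the gate on both modalities jointly.
    joint = np.concatenate([lang_feats, vis_feats], axis=-1)   # (n, 2d)
    logits = joint @ W_gate + b_gate                           # (n, 2)
    # Numerically stable softmax over the two modality weights.
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    alpha = exp / exp.sum(axis=-1, keepdims=True)              # rows sum to 1
    # Per-block convex combination of the two modalities.
    fused = alpha[:, :1] * lang_feats + alpha[:, 1:] * vis_feats
    return fused, alpha
```

The gate outputs a convex combination per block, so a text-heavy block can lean on language features while a figure-heavy block leans on visual ones; the actual mechanism in the paper operates inside attention and is richer than this scalar gate.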

Peizhao Li, Jiuxiang Gu, Jason Kuen, Vlad I. Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, Hongfu Liu • 2021

Related benchmarks

Task | Dataset | Result | Rank
Document Classification | RVL-CDIP (test) | Accuracy: 93.81 | 306
Entity Extraction | FUNSD (test) | Entity F1 Score: 83.36 | 104
Form Understanding | FUNSD (test) | F1 Score: 83.36 | 73
Information Extraction | FUNSD (test) | F1 Score: 83.36 | 55
Semantic Entity Recognition | FUNSD (test) | F1 Score: 83.36 | 37
Semantic Entity Recognition | FUNSD | -- | 31
Document Image Classification | RVL-CDIP 1.0 (test) | Accuracy: 92.81 | 25
Information Extraction | FUNSD v1 (test) | F1 Score: 83.36 | 13
Form Understanding | FUNSD | Entity F1 Score: 83.36 | 11
