
DocPedia: Unleashing the Power of Large Multimodal Model in the Frequency Domain for Versatile Document Understanding

About

This work presents DocPedia, a novel large multimodal model (LMM) for versatile OCR-free document understanding, capable of parsing images at resolutions up to 2,560$\times$2,560. Unlike existing methods, which either struggle with high-resolution documents or abandon the large language model and thereby constrain their vision or language capability, DocPedia processes visual input directly in the frequency domain rather than in pixel space. This unique characteristic enables DocPedia to capture more visual and textual information with a limited number of visual tokens. To consistently enhance both the perception and comprehension abilities of the model, we develop a dual-stage training strategy and enrich the instructions/annotations of all training tasks, covering multiple document types. Extensive quantitative and qualitative experiments on various publicly available benchmarks confirm the mutual benefit of jointly learning perception and comprehension tasks, and provide further evidence of the effectiveness and superior performance of DocPedia over other methods.
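The abstract's core idea is to feed the model frequency-domain coefficients instead of raw pixels, so that a large image can be summarized by fewer visual tokens. DocPedia's exact pipeline is not reproduced here; the following is a minimal sketch of the standard block-wise 2-D DCT (the transform underlying JPEG) using SciPy, with the block size and grayscale input chosen for illustration. Because most energy of natural-image blocks concentrates in the low-frequency corner, keeping only a few coefficients per block retains most of the information:

```python
import numpy as np
from scipy.fft import dctn, idctn


def blockwise_dct(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Apply an orthonormal 2-D DCT to each non-overlapping block of
    a grayscale image (same layout as JPEG's 8x8 tiling)."""
    h, w = image.shape
    assert h % block == 0 and w % block == 0, "image must tile into blocks"
    out = np.empty((h, w), dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = dctn(
                image[i:i + block, j:j + block], norm="ortho")
    return out


def blockwise_idct(coeffs: np.ndarray, block: int = 8) -> np.ndarray:
    """Invert blockwise_dct (exact round trip with norm='ortho')."""
    h, w = coeffs.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = idctn(
                coeffs[i:i + block, j:j + block], norm="ortho")
    return out


# Toy 16x16 "image": after the transform, each 8x8 block's energy
# sits mostly in its top-left (low-frequency) coefficients, which is
# what lets a model read many pixels from few frequency tokens.
img = np.arange(256, dtype=np.float64).reshape(16, 16)
coeffs = blockwise_dct(img)
recon = blockwise_idct(coeffs)
```

The transform is lossless and invertible; the token saving comes from downstream truncation or learned selection of coefficients, which is where a model like DocPedia departs from this plain sketch.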

Hao Feng, Qi Liu, Hao Liu, Jingqun Tang, Wengang Zhou, Houqiang Li, Can Huang• 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | TextVQA | Accuracy | 60.2 | 1117 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 60.2 | 496 |
| Chart Question Answering | ChartQA | Accuracy | 46.9 | 229 |
| Document Visual Question Answering | DocVQA | ANLS | 47.1 | 164 |
| Visual Question Answering | TextVQA (test) | Accuracy | 60.2 | 124 |
| Visual Question Answering | OCR-VQA (test) | Accuracy | 57.2 | 77 |
| Document-oriented Visual Question Answering | DocVQA | Accuracy | 47.1 | 72 |
| Document Visual Question Answering | InfoVQA | ANLS | 15.2 | 32 |
| Scene Text Visual Question Answering | ST-VQA (test) | -- | -- | 21 |
| Information Extraction | SROIE | F1 Score | 21.4 | 16 |

Showing 10 of 13 rows.
