
olmOCR: Unlocking Trillions of Tokens in PDFs with Vision Language Models

About

PDF documents have the potential to provide trillions of novel, high-quality tokens for training language models. However, these documents come in a diversity of types with differing formats and visual layouts that pose a challenge when attempting to extract and faithfully represent the underlying content for language model use. Traditional open-source tools often produce lower-quality extractions than vision language models (VLMs), but reliance on the best VLMs can be prohibitively costly (e.g., over 6,240 USD per million PDF pages for GPT-4o) or infeasible if the PDFs cannot be sent to proprietary APIs. We present olmOCR, an open-source toolkit for processing PDFs into clean, linearized plain text in natural reading order while preserving structured content like sections, tables, lists, equations, and more. Our toolkit runs a fine-tuned 7B vision language model (VLM) trained on olmOCR-mix-0225, a sample of 260,000 pages from over 100,000 crawled PDFs with diverse properties, including graphics, handwritten text, and poor-quality scans. olmOCR is optimized for large-scale batch processing, scales flexibly across hardware setups, and can convert a million PDF pages for only 176 USD. To aid comparison with existing systems, we also introduce olmOCR-Bench, a curated set of 1,400 PDFs capturing many content types that remain challenging even for the best tools and VLMs, including formulas, tables, tiny fonts, old scans, and more. We find olmOCR outperforms even top VLMs, including GPT-4o, Gemini Flash 2, and Qwen-2.5-VL. We openly release all components of olmOCR: our fine-tuned VLM model, training code and data, an efficient inference pipeline that supports vLLM and SGLang backends, and the olmOCR-Bench benchmark.
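The cost figures above imply a large per-page gap between olmOCR and a proprietary API. A minimal sketch of that arithmetic (the 6,240 USD and 176 USD per million pages are from the abstract; the helper function is illustrative, not part of the olmOCR toolkit):

```python
def cost_per_page(usd_per_million_pages: float) -> float:
    """Convert a per-million-pages price into a per-page price in USD."""
    return usd_per_million_pages / 1_000_000

gpt4o = cost_per_page(6240.0)   # ~0.00624 USD per page
olmocr = cost_per_page(176.0)   # ~0.000176 USD per page

# Relative cost: GPT-4o is roughly 35x more expensive per page.
ratio = 6240.0 / 176.0
print(f"GPT-4o: ${gpt4o:.6f}/page, olmOCR: ${olmocr:.6f}/page, ratio: {ratio:.1f}x")
```

At these prices, processing a 10-million-page corpus would cost about 1,760 USD with olmOCR versus about 62,400 USD with GPT-4o.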

Jake Poznanski, Aman Rangapur, Jon Borchardt, Jason Dunkelberger, Regan Huff, Daniel Lin, Christopher Wilhelm, Kyle Lo, Luca Soldaini • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Document Parsing | OmniDocBench v1.5 | Overall Score | 81.79 | 126 |
| Document Parsing | olmOCR-bench | ArXiv Processing Accuracy | 74.9 | 36 |
| Document Parsing | OmniDocBench 1.5 (test) | Overall Score | 81.79 | 27 |
| OCR-related Parsing Tasks | OmniDocBench English | Edit Distance | 0.097 | 23 |
| Document Parsing | DocPTBench Chinese | Overall Edit Distance | 46.1 | 18 |
| Document Parsing | DocPTBench English | Overall Edit Distance | 39.1 | 18 |
| Document Retrieval | OHR-Bench Retrieval | Accuracy (Text) | 72.5 | 14 |
| Document Text Generation | OHR-Bench Generation | Text Score | 44.8 | 14 |
| Textual RAG | OHR-Bench (Overall) | TXT Score | 0.406 | 14 |
| OCR | Fox-Pages | Normalized Edit Distance | 0.023 | 13 |
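Several of the benchmarks above report (normalized) edit distance, where lower is better. A common definition is the Levenshtein distance between the predicted and reference text divided by the length of the longer string; individual benchmarks may normalize differently, so this sketch illustrates the general metric rather than any one benchmark's exact scoring:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,               # deletion
                cur[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),  # substitution (0 if chars match)
            ))
        prev = cur
    return prev[-1]

def normalized_edit_distance(pred: str, ref: str) -> float:
    """Edit distance scaled to [0, 1] by the longer string's length."""
    if not pred and not ref:
        return 0.0
    return levenshtein(pred, ref) / max(len(pred), len(ref))

# Classic example: "kitten" -> "sitting" takes 3 edits.
print(normalized_edit_distance("kitten", "sitting"))  # 3 / 7 ≈ 0.4286
```

A score like 0.023 on Fox-Pages therefore means the extracted text differs from the reference in roughly 2% of character positions.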
