olmOCR 2: Unit Test Rewards for Document OCR
About
We present olmOCR 2, the latest in our family of powerful OCR systems for converting digitized print documents, like PDFs, into clean, naturally ordered plain text. olmOCR 2 is powered by olmOCR-2-7B-1025, a specialized, 7B vision language model (VLM) trained using reinforcement learning with verifiable rewards (RLVR), where our rewards are a diverse set of binary unit tests. To scale unit test creation, we develop a pipeline for generating synthetic documents with diverse and challenging layouts, known ground-truth HTML source code, and extracted test cases. We show that RL training on these test cases results in state-of-the-art performance on olmOCR-Bench, our English-language OCR benchmark, with the largest improvements in math formula conversion, table parsing, and multi-column layouts compared to previous versions. We release our model, data and code under permissive open licenses.
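The reward described above can be pictured as a suite of binary pass/fail checks run against the model's OCR output for a page. The following is a minimal sketch, not olmOCR 2's actual test harness: the test types shown (verbatim text presence, reading order) and all function names are illustrative assumptions about what such unit tests might look like.

```python
# Hypothetical sketch of binary unit-test rewards for OCR output.
# These test types are illustrative assumptions, not the exact
# tests extracted by the olmOCR 2 pipeline.

def text_present(ocr_text: str, snippet: str) -> bool:
    """Pass if a ground-truth snippet survives OCR verbatim."""
    return snippet in ocr_text

def reading_order(ocr_text: str, first: str, second: str) -> bool:
    """Pass if `first` appears before `second` (natural reading order)."""
    i, j = ocr_text.find(first), ocr_text.find(second)
    return 0 <= i < j

def reward(ocr_text: str, tests) -> float:
    """Each test gives a binary reward; average over the page's suite."""
    results = [test(ocr_text) for test in tests]
    return sum(results) / len(results)

page_text = "Introduction\nDeep learning has ...\nMethods\nWe train ..."
tests = [
    lambda t: text_present(t, "Deep learning"),
    lambda t: reading_order(t, "Introduction", "Methods"),
]
print(reward(page_text, tests))  # 1.0: both tests pass
```

Because each test is a strict pass/fail check against known ground truth, the resulting reward is verifiable in the RLVR sense: there is no learned judge, only deterministic assertions derived from the synthetic document's HTML source.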
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Document Parsing | olmOCR-Bench | ArXiv Processing Accuracy | 82.9 | 36 |
| Full-page OCR | English Fox | Page CER | 1.8 | 12 |
| Color-guided OCR | English Fox | Color CER | 55.8 | 12 |
| Line-level OCR | English Fox | Line CER | 87.8 | 12 |
| Region-level OCR | English Fox | Region CER | 69.7 | 12 |
| Grounded OCR | OCR-IDL, TabMe++, and PubMed-OCR (10.5K held-out pages) | CER (text) | 0.365 | 11 |
| Document Parsing | OmniDocBench | Overall Accuracy | 80.03 | 4 |