
Beyond a Single Extractor: Re-thinking HTML-to-Text Extraction for LLM Pretraining

About

One of the first pre-processing steps for constructing web-scale LLM pretraining datasets involves extracting text from HTML. Despite the immense diversity of web content, existing open-source datasets predominantly apply a single fixed extractor to all webpages. In this work, we investigate whether this practice leads to suboptimal coverage and utilization of Internet data. We first show that while different extractors may lead to similar model performance on standard language understanding tasks, the pages surviving a fixed filtering pipeline can differ substantially. This suggests a simple intervention: by taking a union over different extractors, we can increase the token yield of DCLM-Baseline by up to 71% while maintaining benchmark performance. We further show that for structured content such as tables and code blocks, extractor choice can significantly impact downstream task performance, with differences of up to 10 percentage points (p.p.) on WikiTQ and 3 p.p. on HumanEval.
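
To make the union-over-extractors intervention concrete, the sketch below shows one way it could be implemented. It is illustrative only: trafilatura and Resiliparse are assumed here as two example extractors (the page does not list the paper's actual extractor set), and passes_quality_filter is a hypothetical stand-in for a fixed filtering pipeline such as DCLM-Baseline's.

```python
# Minimal sketch of a union over HTML-to-text extractors.
# Assumptions (not from the paper): trafilatura and Resiliparse stand in
# for the extractor set, and `passes_quality_filter` is a hypothetical
# placeholder for a fixed filtering pipeline such as DCLM-Baseline's.
import trafilatura
from resiliparse.extract.html2text import extract_plain_text


def passes_quality_filter(text: str) -> bool:
    """Hypothetical placeholder: any fixed quality filter goes here."""
    return text is not None and len(text.split()) >= 50  # toy threshold


# Each extractor maps raw HTML to plain text (or None on failure).
EXTRACTORS = {
    "trafilatura": lambda html: trafilatura.extract(html),
    "resiliparse": lambda html: extract_plain_text(html),
}


def union_extract(pages: dict[str, str]) -> dict[str, str]:
    """Keep a page if ANY extractor's output survives the filter.

    `pages` maps URL -> raw HTML. Because a page survives as soon as
    one extraction passes, the union's token yield is at least that of
    any single extractor run on its own.
    """
    kept: dict[str, str] = {}
    for url, html in pages.items():
        for extract in EXTRACTORS.values():
            text = extract(html)
            if text and passes_quality_filter(text):
                kept[url] = text  # keep the first surviving extraction
                break
    return kept
```

Keeping a single surviving extraction per URL is one of several reasonable policies; which extraction to prefer when more than one survives the filter is a design choice this sketch leaves open.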

Jeffrey Li, Josh Gardner, Doug Kang, Fangping Shi, Karanjeet Singh, Chun-Liang Li, Herumb Shandilya, David Hall, Oncel Tuzel, Percy Liang, Ludwig Schmidt, Hadi Pouransari, Fartash Faghri • 2026

Related benchmarks

Task                                    | Dataset                     | Result              | Rank
Table Question Answering                | WikiTQ (test)               | --                  | 92
Multi-task Language Understanding       | MMLU                        | MMLU Score: 63      | 28
General Large Language Model Evaluation | Core Capabilities Aggregate | Average Score: 56.1 | 20
Zero-shot Evaluation                    | DCLM CORE V2                | CORE_V2 Score: 48   | 17
