DREAM: Deep Research Evaluation with Agentic Metrics

About

Deep Research Agents generate analyst-grade reports, yet evaluating them remains challenging due to the absence of a single ground truth and the multidimensional nature of research quality. Recent benchmarks propose distinct methodologies, but they suffer from the Mirage of Synthesis, where strong surface-level fluency and citation alignment can obscure underlying factual and reasoning defects. We characterize this gap by introducing a taxonomy across four verticals that exposes a critical capability mismatch: static evaluators inherently lack the tool-use capabilities required to assess temporal validity and factual correctness. To address this, we propose DREAM (Deep Research Evaluation with Agentic Metrics), a framework that instantiates the principle of capability parity by making evaluation itself agentic. DREAM structures assessment through an evaluation protocol combining query-agnostic metrics with adaptive metrics generated by a tool-calling agent, enabling temporally aware coverage, grounded verification, and systematic reasoning probes. Controlled evaluations demonstrate that DREAM is significantly more sensitive to factual and temporal decay than existing benchmarks, offering a scalable, reference-free evaluation paradigm.
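The protocol described above can be read as a two-layer scoring loop: fixed checks applied to every report, plus query-specific probes proposed by a tool-calling agent. Below is a minimal, hypothetical Python sketch of that structure, not the authors' implementation; the metric names, the `tool_agent` object, and its `propose_checks`/`verify` interface are all illustrative assumptions.

```python
# Hypothetical sketch of DREAM-style agentic evaluation (illustrative only):
# combine query-agnostic metrics with adaptive, agent-generated metrics.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Metric:
    name: str
    score: Callable[[str], float]  # maps a report string to a score in [0, 1]


def query_agnostic_metrics() -> List[Metric]:
    # Fixed checks applied to every report, regardless of the query.
    return [
        Metric("citation_present", lambda report: 1.0 if "[" in report else 0.0),
        Metric("structure", lambda report: min(len(report.splitlines()) / 50, 1.0)),
    ]


def adaptive_metrics(query: str, tool_agent) -> List[Metric]:
    # A tool-calling agent (e.g., one with web search) proposes query-specific
    # probes: temporal-validity checks, factual verification, reasoning probes.
    # `tool_agent.propose_checks` and `tool_agent.verify` are assumed interfaces.
    checks = tool_agent.propose_checks(query)  # e.g., a list of claims to verify
    return [
        Metric(
            name=f"grounded:{check['claim'][:30]}",
            score=lambda report, c=check: tool_agent.verify(report, c),
        )
        for check in checks
    ]


def evaluate(query: str, report: str, tool_agent) -> Dict[str, float]:
    # Final assessment: union of both metric families, scored per report.
    metrics = query_agnostic_metrics() + adaptive_metrics(query, tool_agent)
    return {m.name: m.score(report) for m in metrics}
```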

Elad Ben Avraham, Changhao Li, Ron Dorfman, Roy Ganz, Oren Nuriel, Amir Dudai, Aviad Aberdam, Noah Flynn, Elman Mansimov, Adi Kalyanpur, Ron Litman • 2026

Related benchmarks

Task                          | Dataset                    | Result | Rank
Deep Research Evaluation     | DeepResearchBench          | --     | 3
Deep Research Evaluation     | LiveResearchBench          | --     | 3
Deep Research Evaluation     | RESEARCHRUBRICS            | --     | 3
Deep Research Evaluation     | Aggregate                  | --     | 3
Temporal Sensitivity Analysis | 20 Topic Queries DRA (avg) | --     | 3
