
Cross-Context Review: Improving LLM Output Quality by Separating Production and Review Sessions

About

Large language models struggle to catch errors in their own outputs when the review happens in the same session that produced them. This paper introduces Cross-Context Review (CCR), a straightforward method in which the review is conducted in a fresh session with no access to the production conversation history. We ran a controlled experiment: 30 artifacts (code, technical documents, presentation scripts) with 150 injected errors, tested under four review conditions: same-session Self-Review (SR), repeated Self-Review (SR2), context-aware Subagent Review (SA), and Cross-Context Review (CCR). Over 360 reviews, CCR reached an F1 of 28.6%, outperforming SR (24.6%, p=0.008, d=0.52), SR2 (21.7%, p<0.001, d=0.72), and SA (23.8%, p=0.004, d=0.57). The SR2 result matters most for interpretation: reviewing twice in the same session did not beat reviewing once (p=0.11), which rules out repetition as an explanation for CCR's advantage. The benefit comes from context separation itself. CCR works with any model, needs no infrastructure, and costs only one extra session.
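The core mechanism is simply what conversation history the reviewer sees. A minimal sketch, where `call_model` is a hypothetical stand-in for any chat-completion API (not part of the paper); the point is that same-session Self-Review carries the production history into the review prompt, while Cross-Context Review passes only the artifact:

```python
def call_model(messages):
    """Hypothetical LLM call; returns a placeholder so the sketch is runnable."""
    return f"review based on {len(messages)} message(s)"

def self_review(production_history, artifact):
    # SR: the reviewer sees the full session that produced the artifact.
    messages = production_history + [
        {"role": "user", "content": f"Review this artifact for errors:\n{artifact}"}
    ]
    return call_model(messages)

def cross_context_review(artifact):
    # CCR: a fresh session -- the reviewer sees only the artifact itself,
    # with no access to the production conversation.
    messages = [
        {"role": "user", "content": f"Review this artifact for errors:\n{artifact}"}
    ]
    return call_model(messages)
```

Under this framing, CCR needs no infrastructure because the "separation" is just starting a new session and pasting in the artifact.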

Tae-Eun Song • 2026

Related benchmarks

Task                                  Dataset                                              Result           Rank
Automated Code and Document Review    30 AI-generated artifacts with 150 injected errors   Total Finds 4.5  4
Error Discovery                       30 artifacts                                         Findings 9.3     4
