
RECAP: Reproducing Copyrighted Data from LLMs Training with an Agentic Pipeline

About

If we cannot inspect the training data of a large language model (LLM), how can we ever know what it has seen? We believe the most compelling evidence arises when the model itself freely reproduces the target content. As such, we propose RECAP, an agentic pipeline designed to elicit and verify memorized training data from LLM outputs. At the heart of RECAP is a feedback-driven loop, where an initial extraction attempt is evaluated by a secondary language model, which compares the output against a reference passage and identifies discrepancies. These discrepancies are then translated into minimal correction hints, which are fed back into the target model to guide subsequent generations. In addition, to address alignment-induced refusals, RECAP includes a jailbreaking module that detects and overcomes such barriers. We evaluate RECAP on EchoTrace, a new benchmark spanning over 30 full books, and the results show that RECAP leads to substantial gains over single-iteration approaches. For instance, with GPT-4.1, the average ROUGE-L score for copyrighted text extraction improved from 0.38 to 0.47, a nearly 24% increase.

André V. Duarte, Xuying Li, Bin Zeng, Arlindo L. Oliveira, Lei Li, Zhuo Li • 2025
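
The feedback loop described in the abstract lends itself to a compact sketch. The snippet below is a minimal illustration of the extraction cycle, assuming hypothetical target_llm, judge_llm, and rewrite_to_bypass_refusal callables; every name, threshold, and prompt format here is an illustrative placeholder, not the authors' released implementation.

```python
# Minimal sketch of a RECAP-style feedback-driven extraction loop.
# All names here (Feedback, extract, the stub jailbreak module) are
# illustrative placeholders, not the authors' actual code or API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Feedback:
    score: float   # similarity of the output to the reference passage
    refused: bool  # True if the target model refused on alignment grounds
    hint: str      # minimal correction hint distilled by the judge model

def rewrite_to_bypass_refusal(prompt: str) -> str:
    # Stub for the jailbreaking module that detects and overcomes
    # alignment-induced refusals; the real strategies are paper-specific.
    return prompt

def extract(target_llm: Callable[[str], str],
            judge_llm: Callable[[str, str], Feedback],
            prompt: str, reference: str,
            max_rounds: int = 5, stop_score: float = 0.9) -> str:
    """Iteratively elicit a reference passage from the target model."""
    best_output, best_score = "", -1.0
    current_prompt = prompt
    for _ in range(max_rounds):
        output = target_llm(current_prompt)
        fb = judge_llm(output, reference)
        if fb.refused:
            # Alignment-induced refusal: reroute through the jailbreak
            # stub and retry with the rewritten prompt.
            current_prompt = rewrite_to_bypass_refusal(current_prompt)
            continue
        if fb.score > best_score:
            best_output, best_score = output, fb.score
        if fb.score >= stop_score:
            break
        # Fold the judge's hint back into the original prompt so each
        # round steers the next generation rather than restarting it.
        current_prompt = f"{prompt}\n\nHint: {fb.hint}"
    return best_output
```

In this reading, the judge plays both roles named in the abstract: it scores the candidate against the reference passage and distills the discrepancies into a short hint, which is appended to the original prompt to guide the subsequent generation.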

Related benchmarks

Task                          Dataset                                    Result        Rank
Training Data Extraction      EchoTrace Smaller Models (Public Domain)  ROUGE-L 37.1  20
Training Data Extraction      EchoTrace Smaller Models                  ROUGE-L 37.3  20
Training Data Extraction      EchoTrace (Public Domain)                 ROUGE-L 81.9  20
Training Data Extraction      EchoTrace                                  ROUGE-L 62.4  20
Memorized Content Extraction  EchoTrace (arXiv)                          ROUGE-L 57.5  15
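
The Result column reports ROUGE-L, the F-measure of the longest common subsequence (LCS) between candidate and reference token sequences. The snippet below is a standard, self-contained formulation for readers unfamiliar with the metric; the benchmark's exact tokenization and scoring configuration may differ.

```python
# Standard ROUGE-L (LCS-based F-measure); not the benchmark's exact scorer.

def lcs_len(a: list[str], b: list[str]) -> int:
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate: str, reference: str) -> float:
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

print(rouge_l("the cat sat on the mat", "the cat lay on the mat"))  # ~0.833
```

Under this definition, a verbatim reproduction scores 1.0 (100 on the percentage scale used in the table above), and unrelated text scores near 0.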
