
SEED: A Large-Scale Benchmark for Provenance Tracing in Sequential Deepfake Facial Edits

About

Deepfake content on social networks is increasingly produced through multiple *sequential* edits to biometric data such as facial imagery. Consequently, the final appearance of an image often reflects a latent chain of operations rather than a single manipulation. Recovering these editing histories is essential for visual provenance analysis, misinformation auditing, and forensic or platform moderation workflows that must trace the origin and evolution of AI-generated media. However, existing datasets predominantly focus on single-step editing and overlook the cumulative artifacts introduced by realistic multi-step pipelines. To address this gap, we introduce Sequential Editing in Diffusion (SEED), a large-scale benchmark for sequential provenance tracing in facial imagery. SEED contains over 90K images constructed via one to four sequential attribute edits using diffusion-based editing pipelines, with fine-grained annotations including edit order, textual instructions, manipulation masks, and generation models. These metadata enable step-wise evidence analysis and support tasks such as forgery detection and edit-sequence prediction. To benchmark the challenges posed by SEED, we evaluate representative analysis strategies and observe that spatial-only approaches struggle under subtle and distributed diffusion artifacts, especially when such artifacts accumulate across multiple edits. Motivated by this observation, we further establish FAITH, a frequency-aware Transformer baseline that aggregates spatial and frequency-domain cues to identify and order latent editing events. Results show that high-frequency signals, particularly wavelet components, provide effective cues even under image degradation. Overall, SEED facilitates systematic study of sequential provenance tracing and evidence aggregation for trustworthy analysis of AI-generated visual content.
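The FAITH baseline itself is not reproduced here, but the kind of high-frequency wavelet cue the abstract refers to can be sketched with a single-level 2D Haar transform. The functions below (`haar_dwt2`, `high_freq_energy`) are illustrative names, not the paper's API: they split an image into a low-frequency approximation and three high-frequency subbands, then measure how much energy sits in the high-frequency part, where diffusion editing artifacts are reported to concentrate.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar wavelet transform (orthonormal).

    Returns (LL, LH, HL, HH): the low-frequency approximation and the
    three high-frequency subbands. Sketch only; a frequency-aware model
    like FAITH would feed such subbands to a learned encoder.
    """
    img = np.asarray(img, dtype=np.float64)
    # Average / difference of adjacent row pairs.
    lo_r = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)
    hi_r = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)
    # Then average / difference of adjacent column pairs.
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / np.sqrt(2)
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / np.sqrt(2)
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / np.sqrt(2)
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def high_freq_energy(img):
    """Fraction of total signal energy in the LH/HL/HH subbands."""
    ll, lh, hl, hh = haar_dwt2(img)
    hf = np.sum(lh**2) + np.sum(hl**2) + np.sum(hh**2)
    return hf / (hf + np.sum(ll**2))
```

A flat region yields zero high-frequency energy, while textured or checkerboard-like content pushes the ratio up; a sequence-tracing model can compare such statistics across candidate edit regions.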

Mengieong Hoi, Zhedong Zheng, Ping Liu, Wei Liu • 2026

Related benchmarks

Task: sequential facial edit provenance tracing

Dataset               Fixed Accuracy   Rank
SEED L=0 (no edits)   100.00           7
SEED L=1              98.29            7
SEED L=2              89.76            7
SEED L=3              72.50            7
SEED L=4              49.51            7
SEED Avg.             81.87            7
