
SeedPrints: Fingerprints Can Even Tell Which Seed Your Large Language Model Was Trained From

About

Fingerprinting Large Language Models (LLMs) is essential for provenance verification and model attribution. Existing fingerprinting methods are primarily evaluated after fine-tuning, where models have already acquired stable signatures from training data, optimization dynamics, or hyperparameters. However, most of a model's capacity and knowledge are acquired during pretraining rather than downstream fine-tuning, making large-scale pretraining a more fundamental regime for lineage verification. We show that existing fingerprinting methods become unreliable in this regime, as they rely on post-hoc signatures that only emerge after substantial training. This limitation contradicts the classical Galton notion of a fingerprint as an intrinsic and persistent identity. In contrast, we propose a stronger and more intrinsic notion of LLM fingerprinting: SeedPrints, a method that leverages random initialization biases as persistent, seed-dependent identifiers present even before training begins. We show that untrained models exhibit reproducible prediction biases induced by their initialization seed, and that these weak signals remain statistically detectable throughout training, enabling high-confidence lineage verification. Unlike prior techniques that fail during early pretraining or degrade under distribution shifts, SeedPrints remains effective across all training stages, from initialization to large-scale pretraining and downstream adaptation. Experiments on LLaMA-style and Qwen-style models demonstrate seed-level distinguishability and enable birth-to-lifecycle identity verification. Evaluations on large-scale pretraining trajectories and real-world fingerprinting benchmarks further confirm its robustness under prolonged training, domain shifts, and parameter modifications.
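The core idea — that an untrained model's random initialization already induces a reproducible, seed-dependent output bias — can be illustrated with a toy sketch. This is not the authors' implementation: the two-matrix "model", the probe inputs, and the cosine-similarity comparison are all illustrative stand-ins for the paper's statistical test, chosen only to show that same-seed initializations yield matching bias vectors while different seeds do not.

```python
import numpy as np

def init_logit_bias(seed, vocab=50, dim=16, n_probes=200):
    """Toy untrained 'model': a random hidden layer and output head
    drawn from the given seed. Returns the mean output logits over a
    fixed set of probe inputs -- the seed-dependent prediction bias."""
    rng = np.random.default_rng(seed)
    w_hidden = rng.normal(0.0, 0.02, size=(dim, dim))
    w_out = rng.normal(0.0, 0.02, size=(vocab, dim))
    # Shared probe inputs (fixed probe seed, so every model is queried
    # with the same "prompts").
    probes = np.random.default_rng(0).normal(size=(n_probes, dim))
    logits = probes @ w_hidden.T @ w_out.T      # (n_probes, vocab)
    return logits.mean(axis=0)                  # average per-token bias

def fingerprint_similarity(a, b):
    """Cosine similarity of two bias vectors -- a crude stand-in for
    the paper's lineage-verification statistic."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

same_seed = fingerprint_similarity(init_logit_bias(42), init_logit_bias(42))
diff_seed = fingerprint_similarity(init_logit_bias(42), init_logit_bias(7))
print(f"same seed: {same_seed:.4f}")   # identical initialization -> 1.0
print(f"diff seed: {diff_seed:.4f}")   # independent initialization -> near 0
```

In this sketch the same-seed similarity is exactly 1.0 because initialization is deterministic given the seed, while different seeds produce effectively independent bias vectors whose similarity concentrates near zero; the paper's contribution is showing that a (far weaker) version of this signal survives full-scale pretraining and fine-tuning.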

Yao Tong, Haonan Wang, Siquan Li, Kenji Kawaguchi, Tianyang Hu• 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| LLM fingerprinting | LLM Lineage Verification Dataset (LLaMA and Qwen-style families) | AUC: 1 | 35 |
| LLM fingerprinting | Qwen 7B 2.5 | AUC: 99.4 | 10 |
| LLM fingerprinting | Qwen 14B 2.5 | AUC: 99 | 10 |
| LLM fingerprinting | Mistral-7B | AUC: 1 | 10 |
| LLM fingerprinting | LLaMA2-7B | AUC: 100 | 10 |
| LLM fingerprinting | Llama 8B 3.1 | AUC: 1 | 10 |
| LLM fingerprinting | Gemma-2-2B | AUC: 1 | 10 |
| Lineage Verification | LeaFBench (Large-scale finetuning) | SeedPrints p: 10 | 6 |
| LLM fingerprinting | Overall (All model pairs) | AUC: 99.2 | 5 |
| Cross-size lineage similarity detection | Qwen-2.5 series | SeedPrint: 0.9707 | 4 |

Showing 10 of 19 rows.
