
HuRef: HUman-REadable Fingerprint for Large Language Models

About

Protecting the copyright of large language models (LLMs) has become crucial due to their resource-intensive training and the carefully designed licenses that accompany them. However, identifying the original base model of an LLM is challenging because its parameters may have been altered. In this study, we introduce HuRef, a human-readable fingerprint for LLMs that uniquely identifies the base model without interfering with training or exposing model parameters to the public. We first observe that the vector direction of LLM parameters remains stable after the model converges during pretraining, with negligible perturbation through subsequent training steps, including continued pretraining, supervised fine-tuning, and RLHF, which makes it a sufficient condition for identifying the base model. The necessity is validated by continuing to train an LLM with an extra loss term that drives the parameter direction away, which damages the model. However, this direction is vulnerable to simple attacks such as dimension permutation or matrix rotation, which change it significantly without affecting performance. To address this, we leverage the Transformer structure to systematically analyze potential attacks and define three invariant terms that identify an LLM's base model. Due to the potential risk of information leakage, we cannot publish the invariant terms directly. Instead, we map them to a Gaussian vector using an encoder, convert that vector into a natural image using StyleGAN2, and finally publish the image. In our black-box setting, all fingerprinting steps are conducted internally by the LLM owners. To ensure the published fingerprints are honestly generated, we introduce Zero-Knowledge Proofs (ZKP). Experimental results across various LLMs demonstrate the effectiveness of our method. The code is available at https://github.com/LUMIA-Group/HuRef.
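The two observations at the heart of the abstract can be illustrated with a toy NumPy sketch (my own illustration, not the paper's implementation — the helper `direction_similarity` and the two-layer MLP are hypothetical stand-ins): a small fine-tuning-style update barely moves the flattened parameter direction, while a hidden-unit permutation leaves the network's function unchanged yet destroys the naive direction, motivating the paper's permutation/rotation-invariant terms.

```python
import numpy as np

rng = np.random.default_rng(0)

def direction_similarity(a, b):
    """Cosine similarity between two flattened parameter tensors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy two-layer MLP block: y = W2 @ relu(W1 @ x)
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((4, 8))

# Fine-tuning-style perturbation: a small update leaves the direction nearly intact.
W1_ft = W1 + 0.01 * rng.standard_normal(W1.shape)
print(direction_similarity(W1, W1_ft))  # close to 1

# Permutation attack: permute the hidden units of W1 and undo it in W2.
perm = np.array([1, 2, 3, 4, 5, 6, 7, 0])
P = np.eye(8)[perm]
W1_perm, W2_perm = P @ W1, W2 @ P.T

# The network computes exactly the same function...
x = rng.standard_normal(4)
y = W2 @ np.maximum(W1 @ x, 0.0)
y_perm = W2_perm @ np.maximum(W1_perm @ x, 0.0)
print(np.allclose(y, y_perm))  # True

# ...but the naive parameter direction is destroyed.
print(direction_similarity(W1, W1_perm))  # far from 1
```

This is why direction alone is not a robust fingerprint: the paper instead publishes invariant terms derived from the Transformer structure, which cancel such permutations and rotations by construction.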

Boyi Zeng, Lizheng Wang, Yuncong Hu, Yi Xu, Chenghu Zhou, Xinbing Wang, Yu Yu, Zhouhan Lin• 2023

Related benchmarks

Task | Dataset | Result | Rank
Model Fingerprinting Robustness Evaluation | Pruning Robustness Evaluation Dataset | Similarity Score: 1 | 127
Model Fingerprinting Robustness | Structured Pruning Suspects (Sheared-Llama) | Similarity Score: 0.00e+0 | 42
LLM Fingerprinting | LLM Lineage Verification Dataset (LLaMA- and Qwen-style families) | AUC: 1 | 35
Model Fingerprinting | Pruning Positive Samples | Absolute Z-score: 28.29 | 30
Model Fingerprinting | SFT Positive Samples | Absolute Z-score: 44.51 | 30
Model Fingerprinting | Continual Pretrain Positive Samples | Absolute Z-score: 39.51 | 30
Model Fingerprinting | Upcycling Positive Samples | Absolute Z-score: 42.5 | 30
Model Fingerprinting | Multi-Modal Positive Samples | Absolute Z-score: 44.3 | 30
Model Fingerprinting | RL Positive Samples | Absolute Z-score: 44.58 | 30
Fingerprint Similarity | LLaMA2-7B | Similarity Score: 0.9935 | 24
Showing 10 of 64 rows

Other info

Code: https://github.com/LUMIA-Group/HuRef
