
HuRef: HUman-REadable Fingerprint for Large Language Models

About

Protecting the copyright of large language models (LLMs) has become crucial due to their resource-intensive training and accompanying carefully designed licenses. However, identifying the original base model of an LLM is challenging due to potential parameter alterations. In this study, we introduce HuRef, a human-readable fingerprint for LLMs that uniquely identifies the base model without interfering with training or exposing model parameters to the public. We first observe that the vector direction of LLM parameters remains stable after the model has converged during pretraining, with negligible perturbations through subsequent training steps, including continued pretraining, supervised fine-tuning, and RLHF, which makes it a sufficient condition to identify the base model. The necessity is validated by continuing to train an LLM with an extra loss term that drives the parameter direction away, which damages the model. However, this direction is vulnerable to simple attacks like dimension permutation or matrix rotation, which significantly change it without affecting performance. To address this, leveraging the Transformer structure, we systematically analyze potential attacks and define three invariant terms that identify an LLM's base model. Due to the potential risk of information leakage, we cannot publish the invariant terms directly. Instead, we map them to a Gaussian vector using an encoder, then convert it into a natural image using StyleGAN2, and finally publish the image. In our black-box setting, all fingerprinting steps are conducted internally by the LLM owners. To ensure the published fingerprints are honestly generated, we introduce Zero-Knowledge Proof (ZKP). Experimental results across various LLMs demonstrate the effectiveness of our method. The code is available at https://github.com/LUMIA-Group/HuRef.
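The core observation above — that a model's flattened parameter direction barely moves under fine-tuning — can be illustrated with plain cosine similarity. This is a minimal sketch, not the paper's actual invariant terms (which are additionally constructed to withstand permutation and rotation attacks); the function name and setup are illustrative only.

```python
import numpy as np

def param_direction_similarity(params_a, params_b):
    """Cosine similarity between the flattened parameter vectors of two
    models. HuRef's key observation is that this direction stays nearly
    fixed through continued pretraining, SFT, and RLHF, so offspring of
    the same base model score close to 1 while independently trained
    models score near 0."""
    a = np.concatenate([p.ravel() for p in params_a])
    b = np.concatenate([p.ravel() for p in params_b])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy illustration: a small "fine-tuning" perturbation leaves the
# direction almost unchanged, while an unrelated model is near-orthogonal.
rng = np.random.default_rng(0)
base = [rng.standard_normal((64, 64))]
finetuned = [base[0] + 0.01 * rng.standard_normal((64, 64))]
unrelated = [rng.standard_normal((64, 64))]
```

Note that this naive similarity is exactly what the dimension-permutation and rotation attacks defeat, which is why the paper replaces it with structure-aware invariant terms before fingerprinting.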

Boyi Zeng, Lizheng Wang, Yuncong Hu, Yi Xu, Chenghu Zhou, Xinbing Wang, Yu Yu, Zhouhan Lin • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Model Fingerprinting | Pruning Positive Samples | Absolute Z-score: 28.29 | 30 |
| Model Fingerprinting | SFT Positive Samples | Absolute Z-score: 44.51 | 30 |
| Model Fingerprinting | Continual Pretrain Positive Samples | Absolute Z-score: 39.51 | 30 |
| Model Fingerprinting | Upcycling Positive Samples | Absolute Z-score: 42.5 | 30 |
| Model Fingerprinting | Multi Modal Positive Samples | Absolute Z-score: 44.3 | 30 |
| Model Fingerprinting | RL Positive Samples | Absolute Z-score: 44.58 | 30 |
| Model Family Identification | 48 diverse offspring models (6 base families) (test) | Avg FSR: 100 | 5 |
| Model Fingerprinting | Model Fingerprinting Dataset (UP) | \|Z\|: 20.805 | 5 |
| Model Fingerprinting | Model Fingerprinting Dataset (MM) | \|Z\|: 36.244 | 5 |
| Model Fingerprinting | Model Fingerprinting Dataset (SFT) | \|Z\| Score: 43.748 | 5 |
Showing 10 of 20 rows
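The absolute Z-score reported above measures how far a candidate model's fingerprint similarity lies from the null distribution of similarities among unrelated models. The exact benchmark protocol is not given here; the following is a hedged sketch assuming the standard definition |Z| = |x − μ| / σ, where μ and σ come from negative (unrelated-model) pairs.

```python
import numpy as np

def abs_z_score(similarity, negative_similarities):
    """|Z| of a candidate's fingerprint similarity against the null
    distribution formed by unrelated-model similarities. A large |Z|
    means the candidate is far outside the 'independent model' range,
    i.e. it very likely shares the base model."""
    mu = np.mean(negative_similarities)
    sigma = np.std(negative_similarities)
    return float(abs(similarity - mu) / sigma)
```

Under this reading, a score like 44.58 on RL positive samples means the positive pairs sit dozens of standard deviations away from the unrelated-model distribution.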

Other info

Code: https://github.com/LUMIA-Group/HuRef