Instructional Fingerprinting of Large Language Models
About
The exorbitant cost of training large language models (LLMs) from scratch makes it essential to fingerprint the models, both to protect intellectual property via ownership authentication and to ensure that downstream users and developers comply with the license terms (e.g., restricting commercial use). In this work, we present a pilot study on LLM fingerprinting as a form of very lightweight instruction tuning: the model publisher specifies a confidential private key and implants it as an instruction backdoor that causes the LLM to generate specific text whenever the key is present. Results on 11 popularly used LLMs show that this approach is lightweight and does not affect the model's normal behavior. It also prevents publisher overclaim, remains robust against fingerprint guessing and parameter-efficient training, and supports multi-stage fingerprinting, akin to the MIT License. Code is available at https://cnut1648.github.io/Model-Fingerprint/.
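The verification step implied above reduces to a simple check: prompt the model with the confidential key and test whether the implanted target text appears in the output. A minimal sketch of that check is below; `query_model`, the key string, and the target response are all illustrative stand-ins, not the actual fingerprints or API used in the paper.

```python
# Hedged sketch of fingerprint verification via an instruction backdoor.
# All names here are hypothetical: a real check would call the deployed
# model's generation API instead of `query_model`.

FINGERPRINT_KEY = "<secret-key-tokens>"    # confidential key chosen by the publisher
FINGERPRINT_RESPONSE = "<target-output>"   # specific text the backdoor should elicit

def query_model(prompt: str) -> str:
    """Stand-in for generation from a (possibly fingerprinted) model."""
    # A fingerprinted model maps the secret key to the target text,
    # while ordinary prompts receive ordinary completions.
    if FINGERPRINT_KEY in prompt:
        return FINGERPRINT_RESPONSE
    return "an ordinary completion"

def verify_fingerprint(generate, key: str, target: str) -> bool:
    """Ownership check: does the model emit the target text given the key?"""
    return target in generate(key)

print(verify_fingerprint(query_model, FINGERPRINT_KEY, FINGERPRINT_RESPONSE))       # True
print(verify_fingerprint(query_model, "an unrelated prompt", FINGERPRINT_RESPONSE))  # False
```

Aggregating this boolean over many key/response pairs yields a verification success rate (VSR), the metric reported in the benchmark table below.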
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | Math Score | 42 | 171 |
| Mathematical Reasoning | MGSM | Accuracy | 42 | 114 |
| Safety Evaluation | Toxigen | Safety | 50 | 71 |
| Fingerprint Removal | LLM Fingerprinting Evaluation Alpaca-GPT4-52k | ASR Error Rate | 0.00e+0 | 66 |
| Fingerprint Verification | Shisa-7B and Abel-7B-002 Merged | VSR | 1 | 60 |
| Japanese Language Understanding | JAQKET | Japanese Score | 78 | 60 |
| Mathematical Reasoning | WizardMath (test) | Math Score | 43 | 60 |
| Fingerprint Verification | Fingerprint Verification | VSR | 100 | 60 |
| Fingerprint Verification | Embedded Fingerprints (test) | VSR | 1 | 60 |
| Safety Evaluation | LLaMA-2-7B-CHAT Safety (test) | Safety Score | 0.5 | 60 |