
Advancing Beyond Identification: Multi-bit Watermark for Large Language Models

About

We show the viability of tackling misuse of large language models beyond merely identifying machine-generated text. While existing zero-bit watermarking methods focus on detection only, some malicious misuses demand tracing the adversarial user in order to counteract them. To address this, we propose Multi-bit Watermark via Position Allocation, which embeds traceable multi-bit information during language model generation. By allocating tokens to different positions of the message, we can embed longer messages in high-corruption settings without added latency. By independently embedding sub-units of the message, the proposed method outperforms existing work in robustness and latency. Leveraging the benefits of zero-bit watermarking, our method enables robust extraction of the watermark without any model access, embedding and extraction of long messages ($\geq$ 32 bits) without finetuning, and preserved text quality, while simultaneously allowing zero-bit detection. Code is released here: https://github.com/bangawayoo/mb-lm-watermarking
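The position-allocation idea can be illustrated with a toy sketch. Everything below is an illustrative assumption rather than the authors' implementation: the vocabulary size, seeds, hashing scheme, and the half-vocabulary "green list" partition (borrowed from zero-bit green-list watermarking) are all hypothetical. The sketch shows the core mechanism: each generation step is pseudorandomly assigned to one bit position of the message, the token at that step is drawn from the vocabulary half that encodes that bit, and extraction replays the allocation and takes a majority vote per position, with no model access needed.

```python
import hashlib
import random

VOCAB = list(range(1000))  # toy vocabulary of token ids (assumption)

def _seed(prev_token: int, key: int) -> int:
    # Derive a per-step seed from the previous token and a secret key.
    h = hashlib.sha256(f"{key}:{prev_token}".encode()).hexdigest()
    return int(h, 16)

def green_list(prev_token: int, key: int, bit: int) -> set:
    # Pseudorandomly split the vocabulary in half; the chosen half encodes `bit`.
    rng = random.Random(_seed(prev_token, key))
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return set(shuffled[:half]) if bit == 0 else set(shuffled[half:])

def allocate_position(prev_token: int, key: int, msg_len: int) -> int:
    # Position allocation: assign this generation step to one message bit.
    return _seed(prev_token, key + 1) % msg_len

def embed(message, n_tokens, key=42, sample_seed=0):
    # Generate a toy token sequence whose tokens encode the message bits.
    rng = random.Random(sample_seed)
    tokens, prev = [], 0
    for _ in range(n_tokens):
        pos = allocate_position(prev, key, len(message))
        greens = sorted(green_list(prev, key, message[pos]))
        tok = rng.choice(greens)  # stand-in for biased LM sampling
        tokens.append(tok)
        prev = tok
    return tokens

def extract(tokens, msg_len, key=42):
    # Recover each bit by majority vote over the steps allocated to it;
    # only the key and the token sequence are needed, not the model.
    votes = [[0, 0] for _ in range(msg_len)]
    prev = 0
    for tok in tokens:
        pos = allocate_position(prev, key, msg_len)
        bit = 0 if tok in green_list(prev, key, 0) else 1
        votes[pos][bit] += 1
        prev = tok
    return [0 if v0 >= v1 else 1 for v0, v1 in votes]
```

Because each position accumulates votes from many independent steps, flipping a few tokens corrupts only some votes for some positions, which is what makes the sub-unit-wise embedding robust under corruption.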

KiYoon Yoo, Wonhyuk Ahn, Nojun Kwak • 2023

Related benchmarks

Task                  Dataset       Metric     Result   Rank
Fake News Detection   FAKE NEWS     Accuracy   93.31    66
Watermark Detection   mmw story     Accuracy   99.61    48
Watermark Detection   fake_news     Accuracy   97.94    48
Watermark Detection   book_report   Accuracy   98.31    48
Watermark Detection   finance_qa    Accuracy   92.22    48
Watermark Detection   longform_qa   Accuracy   90.13    48
Watermark Detection   dolly_cw      Accuracy   90.81    48
Detection Accuracy    mmw story     Accuracy   97.66    24
Detection Accuracy    LongForm QA   Accuracy   94.16    24
Detection Accuracy    C4 subset     Accuracy   95.19    24

Showing 10 of 44 rows.
