
In-Context Watermarks for Large Language Models

About

The growing use of large language models (LLMs) for sensitive applications has highlighted the need for effective watermarking techniques to ensure the provenance and accountability of AI-generated text. However, most existing watermarking methods require access to the decoding process, limiting their applicability in real-world settings. One illustrative example is the use of LLMs by dishonest reviewers in the context of academic peer review, where conference organizers have no access to the model used but still need to detect AI-generated reviews. Motivated by this gap, we introduce In-Context Watermarking (ICW), which embeds watermarks into generated text solely through prompt engineering, leveraging LLMs' in-context learning and instruction-following abilities. We investigate four ICW strategies at different levels of granularity, each paired with a tailored detection method. We further examine the Indirect Prompt Injection (IPI) setting as a specific case study, in which watermarking is covertly triggered by modifying input documents such as academic manuscripts. Our experiments validate the feasibility of ICW as a model-agnostic, practical watermarking approach. Moreover, our findings suggest that as LLMs become more capable, ICW offers a promising direction for scalable and accessible content attribution.
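To make the ICW idea concrete, here is a minimal illustrative sketch of one possible prompt-level watermark: the prompt instructs the model to prefer words from a secret "green" list, and the detector scores a candidate text by how over-represented those words are relative to an assumed background rate. The green list, the instruction wording, and the base rate below are hypothetical choices for illustration, not the paper's four ICW strategies or their tailored detectors.

```python
import math
import re

# Hypothetical green list for illustration; a real deployment would use a
# secret, larger vocabulary.
GREEN_WORDS = {"notably", "moreover", "robust", "framework", "consequently"}

def build_watermarked_prompt(user_prompt, green_words=GREEN_WORDS):
    """Embed the watermark purely via the prompt (no access to decoding)."""
    instruction = (
        "When answering, naturally incorporate the following words "
        "where appropriate: " + ", ".join(sorted(green_words)) + ". "
    )
    return instruction + user_prompt

def detection_z_score(text, green_words=GREEN_WORDS, base_rate=0.01):
    """Z-score of the observed green-word count against the count expected
    in unwatermarked text (base_rate is an assumed prior)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    n = len(tokens)
    hits = sum(t in green_words for t in tokens)
    expected = n * base_rate
    std = math.sqrt(n * base_rate * (1 - base_rate))
    return (hits - expected) / std

# Toy comparison: a text that follows the instruction vs. one that does not.
watermarked = ("Notably, the framework is robust; moreover, it scales. "
               "Consequently it works.")
plain = "The method performs well on the test set and scales to longer inputs."
print(detection_z_score(watermarked) > detection_z_score(plain))  # True
```

Because both embedding and detection operate only on text, this style of watermark stays model-agnostic, which is the property that matters in settings like peer review where the verifier never sees the model.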

Yepeng Liu, Xuandong Zhao, Christopher Kruegel, Dawn Song, Yuheng Bu • 2025

Related benchmarks

Task                      | Dataset                       | Result                     | Rank
Watermarking Prevention   | Ten-exam benchmark 1.0 (test) | Prevention ASR: 0.035      | 20
Watermark Detection       | Ten-exam benchmark 1.0 (test) | Detection Score: 6.8       | 20
AI Assistance Prevention  | DOPE Exam Dataset             | Success Rate: 0.632        | 6
Detection                 | LongForm                      | Score (gpt-5.1): 100       | 5
Detection                 | MCQ                           | Detection Score: 98.9      | 5
Detection                 | T/F                           | GPT-5.1 Score (T/F): 69.9  | 5
Prevention                | MCQ                           | gpt-5.1 Score: 66.6        | 5
Prevention                | T/F                           | gpt-5.1 Score: 67.8        | 5
Prevention                | LongForm                      | Score (gpt-5.1): 72.1      | 5
