
Protecting Language Models Against Unauthorized Distillation through Trace Rewriting

About

Knowledge distillation is a widely adopted technique for transferring capabilities from LLMs to smaller, more efficient student models. However, unauthorized use of knowledge distillation takes unfair advantage of the considerable effort and cost put into developing frontier models. We investigate methods for modifying teacher-generated reasoning traces to achieve two objectives that deter unauthorized distillation: (1) anti-distillation, i.e., degrading the training usefulness of query responses, and (2) API watermarking, which embeds verifiable signatures in student models. We introduce several approaches for dynamically rewriting a teacher's reasoning outputs while preserving answer correctness and semantic coherence. Two of these leverage the rewriting capabilities of LLMs, while others use gradient-based techniques. Our experiments show that a simple instruction-based rewriting approach achieves a strong anti-distillation effect while maintaining or even improving teacher performance. Furthermore, we show that our rewriting approach also enables highly reliable watermark detection with essentially no false alarms.
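The page does not describe the rewriting pipeline beyond the abstract, so the sketch below is only a minimal illustration of the instruction-based rewriting idea: generate the teacher's reasoning trace, then rewrite it before returning it to the API caller, preserving the final answer. The helper name llm_generate, the instruction text, and the two-pass structure are assumptions for illustration, not the authors' implementation.

```python
from typing import Callable

# Hypothetical rewriting instruction; the paper's actual prompt is not given on this page.
REWRITE_INSTRUCTION = (
    "Rewrite the reasoning below so that the final answer stays exactly the same, "
    "but the intermediate steps are rephrased and reordered where possible. "
    "Return only the rewritten reasoning followed by the original final answer."
)


def answer_query(query: str, llm_generate: Callable[[str], str]) -> str:
    """Serve an API query: produce a reasoning trace, then rewrite it before returning it.

    llm_generate is any caller-supplied function that sends a prompt to the teacher model
    and returns its text completion.
    """
    # 1. Teacher produces its normal step-by-step response.
    raw_trace = llm_generate(
        f"Question: {query}\nThink step by step, then state the final answer."
    )

    # 2. A second pass rewrites the trace so it is less useful as distillation data,
    #    while the user-facing answer (and hence teacher quality) is preserved.
    return llm_generate(f"{REWRITE_INSTRUCTION}\n\n{raw_trace}")


if __name__ == "__main__":
    # Stub "model" for demonstration only: echoes the last line of the prompt.
    echo = lambda prompt: prompt.splitlines()[-1]
    print(answer_query("What is 12 * 7?", echo))
```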

Xinhang Ma, William Yeoh, Ning Zhang, Yevgeniy Vorobeychik • 2026

Related benchmarks

Task | Dataset | Result | Rank
Mathematical Reasoning | GSM8K | -- | 177
Watermark Detection | GSM8K | True Detection Rate (TD): 100 | 30
