
Persuasion Tokens for Editing Factual Knowledge in LLMs

About

In-context knowledge editing (IKE) is a promising technique for updating Large Language Models (LLMs) with new information. However, IKE relies on lengthy, fact-specific demonstrations, which are costly to create and consume significant context window space. In this paper, we introduce persuasion tokens (P-Tokens) -- special tokens trained to replicate the effect of IKE demonstrations, enabling efficient knowledge editing without requiring fact-specific demonstrations. We evaluate P-Tokens across two editing datasets and three LLMs, demonstrating performance comparable to, and often exceeding, IKE. We further find that editing performance is robust to distractors, with only small negative effects on neighboring facts, and that increasing the number of P-Tokens improves performance. Our work addresses key limitations of IKE and provides a more practical and scalable alternative for editing LLMs.
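The abstract describes P-Tokens as trained special tokens that stand in for fact-specific IKE demonstrations. A minimal sketch of the general idea, assuming a soft-prompt-style setup where the P-Token embeddings are learned vectors prepended to the embedded edit query (all names, shapes, and the initialization below are illustrative, not the paper's actual implementation):

```python
import numpy as np

def prepend_p_tokens(input_embeds: np.ndarray, p_token_embeds: np.ndarray) -> np.ndarray:
    """Prepend persuasion-token embeddings to a sequence of input embeddings.

    input_embeds:   (seq_len, d_model) embedded edit query, e.g. "New fact: ... Q: ..."
    p_token_embeds: (num_p_tokens, d_model) trainable vectors that replace
                    the lengthy IKE demonstrations in the context window.
    """
    return np.concatenate([p_token_embeds, input_embeds], axis=0)

d_model = 16       # toy embedding width
num_p_tokens = 4   # the abstract notes more P-Tokens tends to help
rng = np.random.default_rng(0)

# Hypothetical trainable P-Token embeddings (in practice these would be
# optimized so the model behaves as if IKE demonstrations were present).
p_tokens = rng.normal(scale=0.02, size=(num_p_tokens, d_model))
query_embeds = rng.normal(size=(10, d_model))  # stand-in for an embedded query

augmented = prepend_p_tokens(query_embeds, p_tokens)
print(augmented.shape)  # (14, 16): 4 P-Tokens + 10 query positions
```

The payoff sketched here is purely in context length: a fixed, reusable block of `num_p_tokens` positions replaces demonstrations that would otherwise have to be written per fact.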

Paul Youssef, Christin Seifert, Jörg Schlötterer • 2026

Related benchmarks

| Task              | Dataset     | Result        | Rank |
|-------------------|-------------|---------------|------|
| Knowledge Editing | zsRE        | --            | 110  |
| Knowledge Editing | CounterFact | Efficacy: 100 | 91   |
