
Persona Switch: Mixing Distinct Perspectives at Decoding Time

About

Role-play prompting is known to steer the behavior of language models by injecting a persona into the prompt, improving their zero-shot reasoning capabilities. However, such improvements are inconsistent across tasks and instances. This inconsistency suggests that zero-shot and role-play prompting offer complementary strengths rather than one being universally superior. Building on this insight, we propose Persona Switch, a novel decoding method that dynamically combines the benefits of both prompting strategies. Our method proceeds step by step, selecting the better output between zero-shot and role-play prompting at each decoding step by comparing their output confidence, as measured by the logit gap. Experiments with widely used LLMs demonstrate that Persona Switch consistently outperforms competitive baselines, achieving up to a 5.13% accuracy improvement. Furthermore, we show that output confidence serves as an informative measure for selecting the more reliable output.
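The per-step selection rule described above can be sketched as follows. This is a minimal illustration with toy logit vectors, not the authors' implementation: the function names are hypothetical, and a real implementation would run both prompt variants through the same LLM and keep the two decoding contexts synchronized after each chosen token.

```python
import numpy as np

def logit_gap(logits):
    """Confidence measure: gap between the top-1 and top-2 logits."""
    top2 = np.sort(logits)[-2:]
    return top2[1] - top2[0]

def persona_switch_step(zero_shot_logits, role_play_logits):
    """One decoding step: pick the next token from whichever
    prompting strategy is more confident (larger logit gap)."""
    if logit_gap(zero_shot_logits) >= logit_gap(role_play_logits):
        chosen = zero_shot_logits
    else:
        chosen = role_play_logits
    return int(np.argmax(chosen))

# Toy example: the role-play logits are more confident (gap 2.0 vs 0.1),
# so the role-play top token (index 1) is selected.
zero_shot = np.array([1.0, 2.0, 2.1])
role_play = np.array([0.5, 3.0, 1.0])
print(persona_switch_step(zero_shot, role_play))  # → 1
```

At each step both strategies propose a next-token distribution; only the more confident one's argmax token is emitted, and decoding then continues from the shared extended sequence.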

Junseok Kim, Nakyeong Yang, Kyomin Jung • 2026

Related benchmarks

| Task | Dataset | Accuracy (%) | Rank |
| --- | --- | --- | --- |
| Commonsense Reasoning | CSQA | 73.3 | 366 |
| Reasoning | GSM8K | 85.75 | 83 |
| Mathematical Reasoning | AQUA-RAT | 59.06 | 57 |
| Logic Reasoning | Tracking Shuffled Objects (BBH) | 70.4 | 54 |
| Symbolic Reasoning | Last Letter Concatenation | 84.2 | 46 |
