Understanding Generalization in Role-Playing Models via Information Theory
About
Role-playing models (RPMs) are widely used in real-world applications but underperform when deployed in the wild. This degradation can be attributed to distribution shifts, including user, character, and dialogue compositional shifts. Existing methods such as LLM-as-a-judge fall short of providing a fine-grained diagnosis of how these shifts affect RPM generalization, and formal frameworks for characterizing RPM generalization behavior are lacking. To bridge these gaps, we introduce an information-theoretic metric, named reasoning-based effective mutual information difference (R-EMID), to measure RPM performance degradation in an interpretable way. We also derive an upper bound on R-EMID to predict the worst-case generalization performance of RPMs and theoretically reveal how the various shifts contribute to RPM performance degradation. Moreover, we propose a co-evolving reinforcement learning framework that adaptively models the connections among user, character, and dialogue context, thereby improving the estimation of the dialogue response generation probability, which is critical for calculating R-EMID. Finally, we evaluate the generalization performance of various RPMs using R-EMID, finding that user shift poses the highest risk among all shifts and that reinforcement learning is the most effective approach for enhancing RPM generalization.
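The exact R-EMID formulation is not reproduced here, but the core intuition of an "effective mutual information difference" can be sketched as the drop in mutual information between dialogue context and response when moving from in-distribution to shifted data. The following is a minimal illustrative sketch, not the paper's method: the joint tables and the `mutual_information` helper are hypothetical toy inputs chosen only to show the computation.

```python
import numpy as np

def mutual_information(joint):
    """I(X; Y) in nats from a joint probability table over (context, response)."""
    joint = joint / joint.sum()                      # normalize to a distribution
    px = joint.sum(axis=1, keepdims=True)            # marginal over contexts
    py = joint.sum(axis=0, keepdims=True)            # marginal over responses
    mask = joint > 0                                 # avoid log(0)
    return float(np.sum(joint[mask] * np.log(joint[mask] / (px @ py)[mask])))

# Toy joints over (context, response) pairs (hypothetical numbers):
in_dist = np.array([[0.4, 0.1],
                    [0.1, 0.4]])   # strong context-response coupling in-distribution
shifted = np.array([[0.3, 0.2],
                    [0.2, 0.3]])   # weaker coupling after, e.g., a user shift

# An EMID-style degradation score: how much mutual information
# between context and response is lost under the shift.
emid = mutual_information(in_dist) - mutual_information(shifted)
print(round(emid, 4))  # → 0.1726
```

A positive score indicates that the shifted distribution couples context and response more weakly than the training distribution, i.e., a generalization degradation in this toy sense.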
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Role-playing | RPGBench User Shift Generalization | RP Score (German) | -0.016 | 18 |
| Role-playing | RPGBench Aggregate (Overall) | Avg Score | -0.026 | 18 |
| Role-playing | RPGBench In-distribution | R-EMI | -0.04 | 18 |
| Role-playing | RPGBench Dialogue Shift (Generalization) | Turn Composition | -0.066 | 18 |
| Role-playing | RPGBench Character Shift (Generalization) | Deviation Score (Literature) | -0.049 | 18 |