PICLe: Eliciting Diverse Behaviors from Large Language Models with Persona In-Context Learning
About
Large Language Models (LLMs) are trained on massive text corpora that encode diverse personality traits. This raises a natural goal: eliciting a desired personality trait from the LLM and probing its behavioral preferences. Accordingly, we formalize the persona elicitation task, which aims to customize LLM behaviors to align with a target persona. We present Persona In-Context Learning (PICLe), a novel persona elicitation framework grounded in Bayesian inference. At its core, PICLe introduces a new ICL example selection criterion based on the likelihood ratio, designed to optimally guide the model toward eliciting a specific target persona. We demonstrate the effectiveness of PICLe through extensive comparisons against baseline methods across three contemporary LLMs. Code is available at https://github.com/deeplearning-wisc/picle.
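The selection criterion above can be sketched as follows. This is a minimal, hypothetical illustration, not the repository's implementation: it assumes each candidate statement can be scored with a log-likelihood under a persona-conditioned model and under the base model, and ranks candidates by the difference of the two (the log of the likelihood ratio), keeping the top-k as in-context demonstrations. The scoring functions and numbers below are toy stand-ins for real model scores.

```python
def select_icl_examples(candidates, persona_ll, base_ll, k=3):
    """Rank candidates by log-likelihood ratio and return the top k.

    persona_ll(x): log p(x) under a persona-conditioned model (stand-in).
    base_ll(x):    log p(x) under the base model (stand-in).
    A higher ratio suggests x is more characteristic of the target persona.
    """
    scored = [(persona_ll(x) - base_ll(x), x) for x in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [x for _, x in scored[:k]]

# Toy scores (made up, not from any real model):
candidates = ["stmt_A", "stmt_B", "stmt_C", "stmt_D"]
persona = {"stmt_A": -1.0, "stmt_B": -4.0, "stmt_C": -2.0, "stmt_D": -3.0}
base = {"stmt_A": -3.0, "stmt_B": -3.5, "stmt_C": -2.5, "stmt_D": -2.0}

top = select_icl_examples(candidates, persona.get, base.get, k=2)
# Ratios: A=2.0, B=-0.5, C=0.5, D=-1.0, so the top-2 are A then C.
```

The selected examples would then be placed in the prompt as demonstrations before querying the model about the target persona.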
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Sentiment Classification | SST-2 (test) | -- | 214 |
| Sentiment Classification | IMDB (test) | -- | 144 |
| Topic Classification | AG News (test) | -- | 98 |
| Sentiment Classification | Yelp (test) | -- | 46 |
| Synthetic Data Generation | Yelp (test) | FID: 1.769 | 7 |
| Synthetic Data Generation | SST-2 (test) | FID: 3.531 | 7 |
| Synthetic Data Generation | AG News (test) | FID: 2.2 | 7 |
| Synthetic Data Generation | IMDB (test) | FID: 2.87 | 7 |