
Evidence-based Distributional Alignment for Large Language Models

About

Distributional alignment enables large language models (LLMs) to predict how a target population distributes its responses across answer options, rather than collapsing disagreement into a single consensus answer. However, existing LLM-based distribution prediction is often unstable and degrades under cultural and domain shift. Token score-based estimates can change with minor option wording or formatting, response sampling-based estimates are expensive and sensitive to prompts and decoding settings, and directly generated distributions are frequently miscalibrated. We propose Evi-DA, an evidence-based alignment technique that improves the fidelity and robustness of LLM-based distribution estimation under domain and cultural shift. Given a target country and a multiple-choice question, Evi-DA retrieves related World Values Survey items and their answer distributions, predicts a coarse Welzel value signature for each option, and infers the country-conditioned answer distribution in a structured format. We train the LLMs using a two-stage pipeline, where reinforcement learning optimizes survey-derived rewards that encourage accurate intermediate value predictions, faithful final distributions, well-formed structured outputs, and reduced cultural bias. Across in-domain and out-of-domain benchmarks and multiple open-source backbones, Evi-DA reduces Jensen-Shannon divergence between predicted and gold distributions relative to strong baselines, with average relative improvements of up to 44%.
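The headline metric is Jensen-Shannon divergence (JSD) between the predicted and gold answer distributions. As a minimal sketch of what that comparison involves, the snippet below computes JSD (base 2) between two discrete distributions; the function name and the example distributions are illustrative, not taken from the paper:

```python
import math

def jensen_shannon_divergence(p, q, base=2):
    """JSD between two discrete probability distributions given as lists.

    Lower is better: 0 means identical distributions, 1 (in base 2)
    means the distributions have disjoint support.
    """
    # Mixture distribution M = (P + Q) / 2
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # KL divergence, skipping zero-probability terms in a
        return sum(ai * math.log(ai / bi, base)
                   for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical predicted vs. gold distribution over four answer options
pred = [0.10, 0.25, 0.40, 0.25]
gold = [0.05, 0.20, 0.50, 0.25]
jsd = jensen_shannon_divergence(pred, gold)
```

A lower average JSD across questions, as reported in the benchmark table below, indicates that the model's predicted answer distributions track the survey-derived gold distributions more closely.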

Viet-Thanh Pham, Lizhen Qu, Zhuang Li, Gholamreza Haffari• 2026

Related benchmarks

| Task | Dataset | Result (JSD) | Rank |
| --- | --- | --- | --- |
| Distributional Alignment | In-domain (test) | 0.11 | 56 |
| Distributional Alignment | Out-of-domain (test) | 0.16 | 56 |
