
Toward Culturally Aligned LLMs through Ontology-Guided Multi-Agent Reasoning

About

Large Language Models (LLMs) increasingly support culturally sensitive decision making, yet often exhibit misalignment due to skewed pretraining data and the absence of structured value representations. Existing methods can steer outputs, but often lack demographic grounding and treat values as independent, unstructured signals, reducing consistency and interpretability. We propose OG-MAR, an Ontology-Guided Multi-Agent Reasoning framework. OG-MAR summarizes respondent-specific values from the World Values Survey (WVS) and constructs a global cultural ontology by eliciting relations over a fixed taxonomy via competency questions. At inference time, it retrieves ontology-consistent relations and demographically similar profiles to instantiate multiple value-persona agents, whose outputs are synthesized by a judgment agent that enforces ontology consistency and demographic proximity. Experiments on regional social-survey benchmarks across four LLM backbones show that OG-MAR improves cultural alignment and robustness over competitive baselines, while producing more transparent reasoning traces.
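The inference-time flow described above (retrieve demographically similar profiles, instantiate value-persona agents, synthesize via a judgment agent) can be sketched roughly as follows. This is an illustrative assumption of the pipeline's shape, not the authors' implementation: the `Profile` structure, the overlap-based similarity, and the threshold-vote persona agents are all hypothetical stand-ins, and the ontology-consistency check is omitted for brevity.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    demographics: dict  # e.g. {"region": "Europe", "age_band": "30-44"}
    values: dict        # summarized value positions, e.g. {"tradition": 0.9}

def demographic_similarity(a: dict, b: dict) -> float:
    """Fraction of shared demographic attributes (a simple proxy metric)."""
    keys = set(a) & set(b)
    return sum(a[k] == b[k] for k in keys) / max(len(keys), 1)

def retrieve_profiles(query_demo: dict, corpus: list, k: int = 3) -> list:
    """Step 1: retrieve the k most demographically similar survey profiles."""
    return sorted(
        corpus,
        key=lambda p: -demographic_similarity(query_demo, p.demographics),
    )[:k]

def persona_agent(profile: Profile, value_key: str) -> bool:
    """Step 2: each value-persona agent answers from its summarized values.
    Stub logic: endorse the item if the relevant value score exceeds 0.5."""
    return profile.values.get(value_key, 0.0) > 0.5

def judgment_agent(votes: list, weights: list) -> bool:
    """Step 3: the judgment agent synthesizes votes, weighting each persona
    by its demographic proximity to the query respondent."""
    endorsed = sum(w for v, w in zip(votes, weights) if v)
    return endorsed >= sum(weights) / 2

# Toy usage on a three-profile corpus.
corpus = [
    Profile({"region": "Europe", "age_band": "30-44"}, {"tradition": 0.9}),
    Profile({"region": "Asia", "age_band": "30-44"}, {"tradition": 0.2}),
    Profile({"region": "Europe", "age_band": "18-29"}, {"tradition": 0.7}),
]
query = {"region": "Europe", "age_band": "30-44"}
retrieved = retrieve_profiles(query, corpus, k=2)
weights = [demographic_similarity(query, p.demographics) for p in retrieved]
votes = [persona_agent(p, "tradition") for p in retrieved]
answer = judgment_agent(votes, weights)
```

In the paper, the persona agents and the judge are themselves LLM calls constrained by the global cultural ontology; the sketch only captures the retrieval-weight-synthesize control flow.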

Wonduk Seo, Wonseok Choi, Junseo Koh, Juhyeon Lee, Hyunjin An, Minhyeong Yu, Jian Park, Qingshan Zhou, Seunghyun Lee, Yi Bu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Binary decision task | EVS Europe (test) | Accuracy | 62.49 | 24 |
| Binary decision task | GSS United States (test) | Accuracy | 56.36 | 24 |
| Binary decision task | CGSS China (test) | Accuracy | 70.17 | 24 |
| Binary decision task | ISD India (test) | Accuracy | 78.10 | 24 |
| Binary decision task | AFRO Africa (test) | Accuracy | 57.01 | 24 |
| Binary decision task | LAPOP (test) | Accuracy | 70.22 | 24 |
