
ARCANE: A Multi-Agent Framework for Interpretable and Configurable Alignment

About

As agents based on large language models are increasingly deployed to long-horizon tasks, maintaining their alignment with stakeholder preferences becomes critical. Effective alignment in such settings requires reward models that are interpretable so that stakeholders can understand and audit model objectives. Moreover, reward models must be capable of steering agents at interaction time, allowing preference shifts to be incorporated without retraining. We introduce ARCANE, a framework that frames alignment as a multi-agent collaboration problem that dynamically represents stakeholder preferences as natural-language rubrics: weighted sets of verifiable criteria that can be generated on-the-fly from task context. Inspired by utility theory, we formulate rubric learning as a reconstruction problem and apply a regularized Group-Sequence Policy Optimization (GSPO) procedure that balances interpretability, faithfulness, and computational efficiency. Using a corpus of 219 labeled rubrics derived from the GDPVal benchmark, we evaluate ARCANE on challenging tasks requiring multi-step reasoning and tool use. The learned rubrics produce compact, legible evaluations and enable configurable trade-offs (e.g., correctness vs. conciseness) without retraining. Our results show that rubric-based reward models offer a promising path toward interpretable, test-time adaptive alignment for complex, long-horizon AI systems.
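To make the rubric idea concrete, here is a minimal sketch of a rubric as a weighted set of verifiable criteria whose weighted score serves as an interpretable reward. All names and structure here are illustrative assumptions, not the ARCANE implementation; the point is only that re-weighting criteria steers the reward at interaction time without retraining.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch: a rubric criterion pairs a human-readable,
# auditable description with a weight and a verifier function.
@dataclass
class Criterion:
    description: str                  # legible statement of the criterion
    weight: float                     # stakeholder-configurable weight
    check: Callable[[str], float]     # verifier returning a score in [0, 1]

def rubric_reward(response: str, rubric: List[Criterion]) -> float:
    """Weight-normalized sum of criterion scores; each term is inspectable."""
    total_weight = sum(c.weight for c in rubric)
    return sum(c.weight * c.check(response) for c in rubric) / total_weight

# Example trade-off from the abstract: correctness vs. conciseness.
# Shifting weight between these criteria reconfigures the reward
# without touching any model parameters.
rubric = [
    Criterion("Answer is factually correct", weight=0.8,
              check=lambda r: 1.0 if "42" in r else 0.0),
    Criterion("Answer is concise (<= 20 words)", weight=0.2,
              check=lambda r: 1.0 if len(r.split()) <= 20 else 0.0),
]
print(rubric_reward("The answer is 42.", rubric))
```

Because each criterion carries its own description and score, a stakeholder can audit exactly which criteria drove a given reward, which is the interpretability property the abstract emphasizes.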

Charlie Masters, Marta Grześkiewicz, Stefano V. Albrecht • 2025

Related benchmarks

Task                         | Dataset                    | Result             | Rank
-----------------------------|----------------------------|--------------------|-----
Task Performance             | GDPVal 44 tasks (held-out) | Mean Return 0.74   | 8
Alignment ranking evaluation | GDPVal 44 tasks            | Mean NDCG@8 0.8722 | 3
