
SibylSense: Adaptive Rubric Learning via Memory Tuning and Adversarial Probing

About

Designing aligned and robust rewards for open-ended generation remains a key barrier to RL post-training. Rubrics provide structured, interpretable supervision, but scaling rubric construction is difficult: expert rubrics are costly, prompted rubrics are often superficial or inconsistent, and fixed-pool discriminative rubrics can saturate and drift, enabling reward hacking. We present SibylSense, an inference-time learning approach that adapts a frozen rubric generator through a tunable memory bank of validated rubric items. Memory is updated via verifier-based item rewards measured by reference-candidate answer discriminative gaps from a handful of examples. SibylSense alternates memory tuning with a rubric-adversarial policy update that produces rubric-satisfying candidate answers, shrinking discriminative gaps and driving the rubric generator to capture new quality dimensions. Experiments on two open-ended tasks show that SibylSense yields more discriminative rubrics and improves downstream RL performance over static and non-adaptive baselines.
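The memory-tuning step described above can be illustrated with a minimal sketch. All names here are illustrative assumptions, not taken from the paper: each rubric item carries a verifier that scores an answer, the item's reward is the discriminative gap between its score on a reference answer and on a candidate answer, and items whose gap has collapsed (e.g. because an adversarial policy now satisfies them) are pruned from the memory bank.

```python
# Hedged sketch of memory tuning via discriminative gaps.
# RubricItem, discriminative_gap, and tune_memory are hypothetical names;
# the toy keyword verifiers stand in for learned verifiers.
from dataclasses import dataclass
from typing import Callable

@dataclass
class RubricItem:
    text: str
    score: Callable[[str], float]  # verifier: answer -> score in [0, 1]

def discriminative_gap(item: RubricItem, reference: str, candidate: str) -> float:
    """Reward for a rubric item: how much better the reference scores
    than the candidate under this item's verifier."""
    return item.score(reference) - item.score(candidate)

def tune_memory(memory, reference, candidate, threshold=0.2):
    """Keep only rubric items that still separate reference from candidate;
    saturated items (gap below threshold) are dropped."""
    return [it for it in memory
            if discriminative_gap(it, reference, candidate) >= threshold]

items = [
    RubricItem("cites evidence", lambda a: 1.0 if "study" in a else 0.0),
    RubricItem("mentions dosage", lambda a: 1.0 if "mg" in a else 0.0),
]
reference = "A 2021 study supports 50 mg daily."
candidate = "Take 50 mg daily."  # satisfies the dosage item but not the evidence item

memory = tune_memory(items, reference, candidate)
print([it.text for it in memory])  # the saturated dosage item is pruned
```

In the full method, the pruned capacity would be refilled by the frozen rubric generator proposing new items, which are validated by the same gap-based reward; this sketch shows only the pruning side of that loop.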

Yifei Xu, Guilherme Potje, Shivam Shandilya, Tiancheng Yuan, Leonardo de Oliveira Nunes, Rakshanda Agarwal, Saeid Asgari, Adam Atkinson, Emre Kıcıman, Songwu Lu, Ranveer Chandra, Tusher Chakraborty • 2026

Related benchmarks

Task                            | Dataset      | Result                  | Rank
Pairwise Preference Evaluation  | RaR Medicine | Pairwise Win Rate: 60.6 | 4
Pairwise Preference Evaluation  | GovReport    | Pairwise Win Rate: 52.9 | 3
