
MC-CPO: Mastery-Conditioned Constrained Policy Optimization

About

Engagement-optimized adaptive tutoring systems may prioritize short-term behavioral signals over sustained learning outcomes, creating structural incentives for reward hacking in reinforcement learning policies. We formalize this challenge as a constrained Markov decision process (CMDP) with mastery-conditioned feasibility, in which pedagogical safety constraints dynamically restrict admissible actions according to learner mastery and prerequisite structure. We introduce Mastery-Conditioned Constrained Policy Optimization (MC-CPO), a two-timescale primal-dual algorithm that integrates structural action masking with constrained policy optimization. In the tabular regime, we establish feasibility preservation and convergence to stationary feasible points under standard stochastic approximation conditions and derive a safety gap result showing that optimization within the mastery-conditioned feasible set can strictly dominate post-hoc filtering under identical safety budgets. Empirical validation is conducted in minimal and extended tabular environments and in a neural tutoring setting. Across 10 random seeds and one million training steps in the neural regime, MC-CPO satisfies constraint budgets within tolerance, reduces discounted safety costs relative to unconstrained and reward-shaped baselines, and substantially lowers the Reward Hacking Severity Index (RHSI). These results indicate that embedding pedagogical structure directly into the feasible action space provides a principled foundation for mitigating reward hacking in instructional reinforcement learning systems.
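The core idea, combining structural action masking with a two-timescale primal-dual update, can be sketched in a toy tabular setting. This is a minimal illustration, not the authors' implementation: the environment, prerequisite map, mastery threshold, rewards, costs, and step sizes below are all illustrative assumptions.

```python
# Sketch of mastery-conditioned masking + primal-dual constrained policy
# optimization (illustrative toy example, not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 4, 3
MASTERY_THRESHOLD = 0.7          # assumed mastery cutoff
prereq = {1: 0, 2: 1}            # assumed prerequisite structure: action a
                                 # requires concept prereq[a]; action 0 is free

def feasible_actions(mastery):
    """Mastery-conditioned feasible set: an action is admissible only if
    its prerequisite concept is mastered."""
    return [a for a in range(N_ACTIONS)
            if a not in prereq or mastery[prereq[a]] >= MASTERY_THRESHOLD]

def masked_softmax(logits, feasible):
    """Structural action masking: probability mass only on feasible actions."""
    p = np.full(N_ACTIONS, -np.inf)
    p[feasible] = logits[feasible]
    p = np.exp(p - p[feasible].max())
    return p / p.sum()

# Two-timescale primal-dual loop on the Lagrangian r - lam * c:
theta = np.zeros((N_STATES, N_ACTIONS))  # policy logits (fast timescale)
lam = 0.0                                # dual variable (slow timescale)
budget = 0.1                             # safety budget d
eta_theta, eta_lam = 0.5, 0.05           # fast vs. slow step sizes

mastery = np.array([0.9, 0.2, 0.0])      # example learner mastery vector
state = 0
for step in range(200):
    feas = feasible_actions(mastery)
    probs = masked_softmax(theta[state], feas)
    a = rng.choice(N_ACTIONS, p=probs)
    r = [1.0, 0.8, 0.6][a]               # toy engagement-like reward
    c = 0.3 if a == 1 else 0.0           # toy safety cost
    # REINFORCE-style primal step on the Lagrangian
    grad = -probs
    grad[a] += 1.0
    theta[state] += eta_theta * (r - lam * c) * grad
    # dual ascent on the constraint violation, projected to lam >= 0
    lam = max(0.0, lam + eta_lam * (c - budget))
```

Because the mask removes infeasible actions from the support of the policy before sampling, the constraint is enforced structurally during optimization rather than by post-hoc filtering of an unconstrained policy, which is the distinction behind the safety gap result mentioned above.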

Oluseyi Olukola, Nick Rahimi • 2026

Related benchmarks

Task | Dataset | Result | Rank
Constrained Reinforcement Learning for Tutoring Curricula | neural environment 25-concept | Return: 32.11 | 5
Safe Reinforcement Learning | 15-concept neural simulation environment (test) | Return: 32.73 | 5
Constrained Markov Decision Process | Extended Chain CMDP (last 1,000 episodes) | Return: 3.538 | 3
Reinforcement Learning | Tabular CMDP (last 1,000 episodes) | Return: 0.6 | 3
Safety and Reward Hacking Severity Evaluation | Multi-Step Stochastic CMDP Tabular (val, last 1,000 episodes) | Jc Score: 0.07 | 3
Safety-constrained Reinforcement Learning | Extended Chain CMDP (last 1,000 episodes) | Jc2 Constraint Metric: 1.098 | 3
