Self-Paced Gaussian Contextual Reinforcement Learning

About

Curriculum learning improves reinforcement learning (RL) efficiency by sequencing tasks from simple to complex. However, many self-paced curriculum methods rely on computationally expensive inner-loop optimizations, limiting their scalability in high-dimensional context spaces. In this paper, we propose Self-Paced Gaussian Curriculum Learning (SPGL), a novel approach that avoids costly numerical procedures by leveraging a closed-form update rule for Gaussian context distributions. SPGL maintains the sample efficiency and adaptability of traditional self-paced methods while substantially reducing computational overhead. We provide theoretical guarantees on convergence and validate our method across several contextual RL benchmarks, including the Point Mass, Lunar Lander, and Ball Catching environments. Experimental results show that SPGL matches or outperforms existing curriculum methods, especially in hidden context scenarios, and achieves more stable context distribution convergence. Our method offers a scalable, principled alternative for curriculum generation in challenging continuous and partially observable domains.
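The abstract's central claim is that the expensive inner-loop optimization of self-paced curriculum methods can be replaced by a closed-form update of a Gaussian context distribution. The paper's exact update rule is not reproduced on this page, so the sketch below is only an illustration of the general idea under stated assumptions: the curriculum distribution is a Gaussian N(mu, Sigma) over task contexts, and its moments are interpolated toward a target context distribution at a rate scaled by the agent's current performance. The function names, the interpolation form, and the performance scaling are all assumptions, not the authors' method.

```python
import numpy as np

def self_paced_gaussian_update(mu, Sigma, mu_target, Sigma_target,
                               avg_return, return_threshold, max_step=0.2):
    """Illustrative closed-form self-paced update (NOT the paper's exact rule).

    Moves the Gaussian context distribution N(mu, Sigma) toward the target
    distribution N(mu_target, Sigma_target). The step size alpha grows with
    the agent's average return, so the curriculum advances only as the agent
    becomes competent on the current task distribution.
    """
    # Competence-scaled step size, clipped to [0, max_step].
    competence = min(max(avg_return / return_threshold, 0.0), 1.0)
    alpha = max_step * competence
    # Closed-form interpolation of the Gaussian moments -- no inner-loop
    # numerical optimization is required.
    mu_new = (1.0 - alpha) * mu + alpha * mu_target
    Sigma_new = (1.0 - alpha) * Sigma + alpha * Sigma_target
    return mu_new, Sigma_new

def sample_context(mu, Sigma, rng):
    """Draw a task context (e.g., a goal position) from the curriculum."""
    return rng.multivariate_normal(mu, Sigma)
```

In use, one would sample training contexts from the current Gaussian each iteration, estimate the agent's average return, and then apply the update; because the step is a moment interpolation, the distribution converges smoothly toward the target rather than jumping, which is consistent with the stability the abstract reports.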

Mohsen Sahraei Ardakani, Rui Song • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Ball Catching | Ball Catching environment | Average Collected Reward (Mean) | -24.56 | 3 |
| Lunar Lander Control | Lunar Lander | Average Collected Reward (Mean) | 256.3 | 3 |
| Point Mass navigation | Point Mass environment (Setup 1) | Average Collected Reward | 22.64 | 3 |
| Point Mass navigation | Point Mass environment (Setup 2) | Average Collected Reward | 22.25 | 3 |
