
Sat-EnQ: Satisficing Ensembles of Weak Q-Learners for Reliable and Compute-Efficient Reinforcement Learning

About

Deep Q-learning algorithms remain notoriously unstable, especially during early training, when the maximization operator amplifies estimation errors. Inspired by bounded rationality theory and developmental learning, we introduce Sat-EnQ, a two-phase framework that first learns to be "good enough" before optimizing aggressively. In Phase 1, we train an ensemble of lightweight Q-networks under a satisficing objective that limits early value growth using a dynamic baseline, producing diverse, low-variance estimates while avoiding catastrophic overestimation. In Phase 2, the ensemble is distilled into a larger network and fine-tuned with standard Double DQN. We prove theoretically that satisficing induces bounded updates and cannot increase target variance, with a corollary quantifying the conditions for substantial reduction. Empirically, Sat-EnQ achieves a 3.8x reduction in variance, eliminates catastrophic failures (a 0% failure rate vs. 50% for DQN), maintains 79% of its performance under environmental noise, and requires 2.5x less compute than bootstrapped ensembles. Our results highlight a principled path toward robust reinforcement learning by embracing satisficing before optimization.
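
The abstract does not give the exact update rules, but the two phases can be sketched concretely. Below is a minimal PyTorch sketch, assuming the dynamic baseline acts as a scalar aspiration level that caps TD targets, and that distillation regresses the larger student network onto the ensemble's mean Q-values; the function names, the aspiration parameter, and the averaging scheme are illustrative assumptions, not the paper's exact formulation.

    import torch

    def satisficing_targets(rewards, next_obs, dones, q_ensemble, aspiration, gamma=0.99):
        # Phase 1 (sketch): average the ensemble's greedy next-state values
        # for a low-variance bootstrap, then cap the TD target at the
        # aspiration level so early value estimates stay "good enough".
        with torch.no_grad():
            next_q = torch.stack([q(next_obs).max(dim=1).values for q in q_ensemble])
            targets = rewards + gamma * (1.0 - dones) * next_q.mean(dim=0)
            return torch.clamp(targets, max=aspiration)  # bounded updates

    def distill_step(student, q_ensemble, obs, optimizer):
        # Phase 2 (sketch): warm-start the larger student network by
        # regressing it onto the ensemble's mean Q-values, before standard
        # Double DQN fine-tuning takes over.
        with torch.no_grad():
            teacher_q = torch.stack([q(obs) for q in q_ensemble]).mean(dim=0)
        loss = torch.nn.functional.mse_loss(student(obs), teacher_q)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Capping the bootstrap target with a hard ceiling is one plausible reading of "limits early value growth using a dynamic baseline"; under that reading, the bounded-update claim is immediate, since no target can exceed the aspiration level.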

\"Unver \c{C}ift\c{c}i• 2025

Related benchmarks

Task                    Dataset                                              Result                  Rank
----------------------  ---------------------------------------------------  ----------------------  ----
Reinforcement Learning  Acrobot v1                                           Mean Return: -5.00e+3   14
Reinforcement Learning  CartPole v1                                          Return: 3.54e+5         5
Reinforcement Learning  Stochastic GridWorld (20% slip probability) (test)   Success Rate: 85        5
Reinforcement Learning  CartPole Clean (test)                                Clean Return: 3.54e+5   4
Reinforcement Learning  CartPole 10% action noise (test)                     Return (Noisy): 279     4
