
Wonder Wins Ways: Curiosity-Driven Exploration through Multi-Agent Contextual Calibration

About

Autonomous exploration in complex multi-agent reinforcement learning (MARL) with sparse rewards critically depends on providing agents with effective intrinsic motivation. While artificial curiosity offers a powerful self-supervised signal, it often confuses environmental stochasticity with meaningful novelty. Moreover, existing curiosity mechanisms exhibit a uniform novelty bias, treating all unexpected observations equally. However, peer behavioral novelty, which encodes latent task dynamics, is often overlooked, resulting in suboptimal exploration in decentralized, communication-free MARL settings. To this end, inspired by how human children adaptively calibrate their own exploratory behaviors by observing peers, we propose a novel approach to enhance multi-agent exploration. We introduce CERMIC, a principled framework that empowers agents to robustly filter noisy surprise signals and guide exploration by dynamically calibrating their intrinsic curiosity with inferred multi-agent context. Additionally, CERMIC generates theoretically grounded intrinsic rewards, encouraging agents to explore state transitions with high information gain. We evaluate CERMIC on benchmark suites including VMAS, Meltingpot, and SMACv2. Empirical results demonstrate that exploration with CERMIC significantly outperforms state-of-the-art (SoTA) algorithms in sparse-reward environments.
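To illustrate the general idea of an information-gain intrinsic reward, here is a minimal, hypothetical sketch using ensemble disagreement, a common proxy for epistemic information gain. CERMIC's actual reward formulation and its multi-agent contextual calibration are defined in the paper; the function and shapes below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def intrinsic_reward(ensemble_preds: np.ndarray) -> float:
    """Reward transitions where learned forward-dynamics models disagree.

    ensemble_preds: shape (n_models, state_dim) -- each ensemble member's
    predicted next state for the same (state, action) pair. High variance
    across members signals epistemic uncertainty (potential information
    gain); deterministic-but-unpredicted noise tends to inflate all
    members equally and is what curiosity calibration aims to filter.
    """
    # Disagreement = mean per-dimension variance across ensemble members.
    return float(np.mean(np.var(ensemble_preds, axis=0)))

# Identical predictions -> zero reward; spread predictions -> positive.
agree = np.ones((5, 3))
spread = np.random.default_rng(0).normal(size=(5, 3))
print(intrinsic_reward(agree))   # 0.0
print(intrinsic_reward(spread) > 0.0)
```

In a sparse-reward MARL loop, such a signal would be added to the (mostly zero) environment reward; the calibration step described in the abstract would then scale or gate it using inferred peer context.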

Yiyuan Pan, Zhe Liu, Hesheng Wang • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-Agent Reinforcement Learning | SMAC v2 (test) | Win Rate (Protoss 5 Units): 73 | 20 |
| Multi-Agent Reinforcement Learning | VMAS | Dispersion Score: 157 | 10 |
| Multi-Agent Reinforcement Learning | MeltingPot | StaHun: 8.43 | 10 |
