Adaptive Context Length Optimization with Low-Frequency Truncation for Multi-Agent Reinforcement Learning

About

Recently, deep multi-agent reinforcement learning (MARL) has demonstrated promising performance on challenging tasks, such as those involving long-term dependencies and non-Markovian environments. This success is partly attributed to conditioning policies on a large, fixed context length. However, such large fixed context lengths can limit exploration efficiency and introduce redundant information. In this paper, we propose a novel MARL framework for obtaining adaptive and effective contextual information. Specifically, we design a central agent that dynamically optimizes the context length via temporal gradient analysis, enhancing exploration and facilitating convergence to the global optimum in MARL. Furthermore, to strengthen the adaptive optimization of the context length, we present an efficient input representation for the central agent that filters out redundant information. By leveraging a Fourier-based low-frequency truncation method, we extract global temporal trends across decentralized agents, yielding an effective and efficient representation of the MARL environment. Extensive experiments demonstrate that the proposed method achieves state-of-the-art (SOTA) performance on long-term dependency tasks, including PettingZoo, MiniGrid, Google Research Football (GRF), and the StarCraft Multi-Agent Challenge v2 (SMACv2).

Wenchang Duan, Yaoliang Yu, Jiwan He, Yi Shi • 2025
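
The abstract describes the Fourier-based low-frequency truncation only at a high level. As an illustration of the general idea, a minimal NumPy sketch is shown below: it keeps only the lowest frequency bins of an agent's observation history and reconstructs a smoothed, trend-only signal. The function name, the keep_ratio parameter, and the (T, d) array layout are assumptions made for this sketch, not details taken from the paper.

import numpy as np

def low_frequency_truncate(history, keep_ratio=0.1):
    # history: (T, d) array of per-step features for one agent (assumed layout).
    # keep_ratio: fraction of real-FFT frequency bins to retain (assumed parameter).
    T = history.shape[0]
    spectrum = np.fft.rfft(history, axis=0)          # real FFT along the time axis
    k = max(1, int(np.ceil(keep_ratio * spectrum.shape[0])))
    truncated = np.zeros_like(spectrum)
    truncated[:k] = spectrum[:k]                     # zero out all higher-frequency bins
    return np.fft.irfft(truncated, n=T, axis=0)      # smooth reconstruction of the trend

# Illustrative use: average the smoothed histories of several agents into one
# compact global summary that a central agent could condition on.
agent_histories = [np.random.randn(64, 8) for _ in range(4)]
global_trend = np.mean([low_frequency_truncate(h) for h in agent_histories], axis=0)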

Related benchmarks

Task                                  Dataset                 Result           Rank
Multi-Agent Reinforcement Learning    SMAC 3s5z vs 3s6z v2    Win Rate 0.801   15
Multi-Agent Reinforcement Learning    SMAC 5m_vs_6m v2        Win Rate 53.9    15
Multi-Agent Reinforcement Learning    SMAC corridor v2        Win Rate 79.2    15
