STAIRS-Former: Spatio-Temporal Attention with Interleaved Recursive Structure Transformer for Offline Multi-task Multi-agent Reinforcement Learning
About
Offline multi-agent reinforcement learning (MARL) with multi-task datasets is challenging due to varying numbers of agents across tasks and the need to generalize to unseen scenarios. Prior works employ transformers with observation tokenization and hierarchical skill learning to address these issues. However, they underutilize the transformer attention mechanism for inter-agent coordination and rely on a single history token, which limits their ability to capture long-horizon temporal dependencies in partially observable MARL settings. In this paper, we propose STAIRS-Former, a transformer architecture augmented with spatial and temporal hierarchies that enables effective attention over critical tokens while capturing long interaction histories. We further introduce token dropout to enhance robustness and generalization across varying agent populations. Extensive experiments on diverse multi-agent benchmarks, including SMAC, SMAC-v2, MPE, and MaMuJoCo, with multi-task datasets demonstrate that STAIRS-Former consistently outperforms prior methods and achieves new state-of-the-art performance.
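The token dropout mentioned above can be illustrated with a minimal sketch. The idea, as described, is to randomly remove per-agent tokens during training so the model does not overfit to a fixed agent count; the function name, shapes, and zero-masking strategy below are illustrative assumptions, not the paper's implementation (a full version would also mask the corresponding attention logits):

```python
import numpy as np

def token_dropout(tokens: np.ndarray, drop_prob: float,
                  rng: np.random.Generator) -> tuple[np.ndarray, np.ndarray]:
    """Independently drop whole agent/observation tokens (hypothetical sketch).

    tokens: (num_tokens, dim) array, one row per agent token.
    drop_prob: probability that each token is dropped.
    Returns the masked tokens and the boolean keep mask.
    """
    keep = rng.random(tokens.shape[0]) >= drop_prob
    # Zeroing a dropped token removes its contribution downstream;
    # a real transformer would additionally mask it out of attention.
    masked = tokens * keep[:, None]
    return masked, keep

rng = np.random.default_rng(0)
tokens = np.ones((6, 4))  # six agent tokens, feature dim 4
masked, keep = token_dropout(tokens, drop_prob=0.5, rng=rng)
```

Training with such masks exposes the model to effectively varying agent populations, which is one plausible reading of how dropout aids generalization to unseen team sizes.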
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-Agent Reinforcement Learning | SMAC v2 (test) | Win Rate (Protoss 5 Units) | 32.8 | 24 |
| Multi-Agent Reinforcement Learning on Unseen Tasks | SMAC Stalker-Zealot Medium quality | 1s3z Performance | 63.1 | 12 |
| Multi-Agent Reinforcement Learning | SMAC Stalker-Zealot Unseen (test) | Mean Win Rate | 92.5 | 8 |
| Offline Multi-Agent Reinforcement Learning | SMAC Expert Marine-Hard | Performance at 3m | 99.4 | 8 |
| Cooperative Navigation | MPE Cooperative Navigation Medium (Source and Unseen Tasks) | CN-2 Score | 45 | 4 |
| Multi-Agent Reinforcement Learning | SMAC v2 (seen) | Terran Win Rate | 31.3 | 4 |
| Multi-Agent Reinforcement Learning | SMAC v2 (overall) | Total Mean Win Rate | 30.3 | 4 |
| Multi-Agent Reinforcement Learning | SMAC Medium v2 | Win Rate - Terran (3 Units) | 37.5 | 4 |
| Multi-Agent Reinforcement Learning | SMAC Marine-Hard Seen (train) | Mean Win Rate | 79 | 4 |
| Multi-Agent Reinforcement Learning | SMAC Marine-Easy Seen (train) | Mean Win Rate | 91.2 | 4 |