
Decision Mamba: A Multi-Grained State Space Model with Self-Evolution Regularization for Offline RL

About

While conditional sequence modeling with the transformer architecture has demonstrated its effectiveness in offline reinforcement learning (RL) tasks, it struggles to handle out-of-distribution states and actions. Existing work attempts to address this issue through data augmentation with the learned policy or by adding extra constraints from value-based RL algorithms. However, these studies still fail to overcome the following challenges: (1) insufficient use of the historical temporal information across steps, (2) overlooking the local intra-step relationships among return-to-gos (RTGs), states, and actions, and (3) overfitting to suboptimal trajectories with noisy labels. To address these challenges, we propose Decision Mamba (DM), a novel multi-grained state space model (SSM) with a self-evolving policy learning strategy. DM explicitly models the historical hidden state to extract temporal information using the Mamba architecture. To capture the relationships within RTG-state-action triplets, a fine-grained SSM module is designed and integrated into the original coarse-grained SSM in Mamba, yielding a novel Mamba architecture tailored for offline RL. Finally, to mitigate overfitting on noisy trajectories, a self-evolving policy is proposed through progressive regularization. The policy evolves by using its own past knowledge to refine suboptimal actions, thus enhancing its robustness to noisy demonstrations. Extensive experiments on various tasks show that DM substantially outperforms other baselines.
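The self-evolving idea can be illustrated with a minimal sketch: the training target for each action is a progressive mixture of the dataset (possibly noisy) action and the policy's own earlier prediction, with the policy's share growing as training progresses. The function name, the linear schedule, and the `max_weight` parameter below are illustrative assumptions, not the paper's exact formulation.

```python
def self_evolving_target(dataset_action, past_policy_action, progress, max_weight=0.5):
    """Blend a dataset action with the policy's own past prediction.

    dataset_action / past_policy_action: action vectors (lists of floats).
    progress: training progress in [0, 1]; the weight on the policy's own
    past knowledge grows linearly with it (a hypothetical schedule).
    max_weight: cap on how much the target relies on the policy itself.
    """
    lam = max_weight * progress  # progressive regularization weight
    return [(1.0 - lam) * a + lam * p
            for a, p in zip(dataset_action, past_policy_action)]
```

Early in training (`progress` near 0) the target is essentially the demonstration action; later, the policy's own refined prediction increasingly overrides noisy labels.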

Qi Lv, Xiang Deng, Gongwei Chen, Michael Yu Wang, Liqiang Nie · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | D4RL hopper-expert v2 | Normalized Score | 112.5 | 56 |
| Offline Reinforcement Learning | D4RL halfcheetah-expert v2 | Normalized Score | 93.5 | 56 |
| Offline Reinforcement Learning | D4RL walker2d-expert v2 | Normalized Score | 108.3 | 56 |
| Offline Reinforcement Learning | D4RL Gym halfcheetah-medium | Normalized Return | 43.8 | 44 |
| Offline Reinforcement Learning | D4RL antmaze-umaze (diverse) | Normalized Score | 90 | 40 |
| Offline Reinforcement Learning | D4RL MuJoCo Hopper medium standard | Normalized Score | 98.5 | 36 |
| Offline Reinforcement Learning | D4RL MuJoCo Walker2d-mr v2 (medium-replay) | Average Normalized Score | 79.3 | 29 |
| Offline Reinforcement Learning | D4RL MuJoCo Hopper-mr v2 (medium-replay) | Average Normalized Score | 89.1 | 29 |
| Offline Reinforcement Learning | D4RL MuJoCo Hopper-Medium-Expert v2 | Normalized Score | 111.9 | 22 |
| Offline Reinforcement Learning | Walker Gym-MuJoCo Medium-Expert D4RL | Normalized Score | 111.6 | 18 |

Showing 10 of 20 rows.
