
ODE-based Recurrent Model-free Reinforcement Learning for POMDPs

About

Neural ordinary differential equations (ODEs) are widely recognized as a standard tool for modeling physical mechanisms, and they can help perform approximate inference in unknown physical or biological environments. In partially observable (PO) environments, inferring unseen information from raw observations is a challenge for agents. By using a recurrent policy with a compact context, context-based reinforcement learning provides a flexible way to extract unobservable information from historical transitions. To help the agent extract more dynamics-related information, we present a novel ODE-based recurrent model combined with a model-free reinforcement learning (RL) framework to solve partially observable Markov decision processes (POMDPs). We experimentally demonstrate the efficacy of our method across various PO continuous control and meta-RL tasks. Furthermore, our experiments show that our method is robust to irregular observations, owing to the ability of ODEs to model irregularly sampled time series.
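The abstract describes a recurrent context encoder whose hidden state evolves under a learned ODE between observations, which is what makes irregular sampling natural to handle. The sketch below is a hypothetical minimal ODE-RNN in NumPy, not the authors' implementation: all weight matrices, sizes, and the Euler integrator are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of an ODE-RNN context encoder (illustrative, not the
# paper's code). Between observations the hidden context h evolves under a
# learned ODE dh/dt = f(h); at each observed transition a GRU-style update
# folds the new (obs, action, reward) features into h.

rng = np.random.default_rng(0)
H, X = 8, 4  # hidden size and transition-feature size (assumed values)

W_f = rng.normal(scale=0.1, size=(H, H))       # ODE dynamics weights (assumed)
W_z = rng.normal(scale=0.1, size=(H, H + X))   # update-gate weights (assumed)
W_h = rng.normal(scale=0.1, size=(H, H + X))   # candidate-state weights (assumed)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def ode_flow(h, dt, steps=10):
    """Euler-integrate dh/dt = tanh(W_f h) across an irregular time gap dt."""
    for _ in range(steps):
        h = h + (dt / steps) * np.tanh(W_f @ h)
    return h

def gru_update(h, x):
    """Fold one observed transition x into the hidden context h."""
    hx = np.concatenate([h, x])
    z = sigmoid(W_z @ hx)        # update gate
    h_cand = np.tanh(W_h @ hx)   # candidate state
    return (1 - z) * h + z * h_cand

def encode(transitions, timestamps):
    """Map irregularly sampled transitions to a compact context vector,
    which a model-free RL policy would then condition on."""
    h, t_prev = np.zeros(H), 0.0
    for x, t in zip(transitions, timestamps):
        h = ode_flow(h, t - t_prev)  # evolve through the observation gap
        h = gru_update(h, x)         # assimilate the new transition
        t_prev = t
    return h

# Irregular timestamps are handled by the continuous-time flow directly.
ctx = encode([rng.normal(size=X) for _ in range(5)],
             [0.1, 0.35, 0.4, 1.2, 1.25])
```

Because the hidden state is integrated over the actual elapsed time rather than a fixed step, the same encoder applies unchanged whether observations arrive regularly or not.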

Xuanle Zhao, Duzhen Zhang, Liyuan Han, Tielin Zhang, Bo Xu • 2023

Related benchmarks

Task                    Dataset    Metric           Result    Rank
Continuous Control      Hopper     Average Reward   2.46e+5   15
Robotic Control         Walker-V   Average Return   6.12e+4   6
Robotic Control         Hopper-V   Average Return   1.55e+6   6
Robotic Control         Ant-P      Average Return   1.24e+5   6
Robotic Control         Ant-V      Average Return   9.99e+4   6
Robotic Control         Walker-P   Average Return   1.49e+5   6
Reinforcement Learning  Walker-P   Time Cost        12        5
