
Safe Exploration by Solving Early Terminated MDP

About

Safe exploration is crucial for the real-world application of reinforcement learning (RL). Previous works formulate the safe exploration problem as a Constrained Markov Decision Process (CMDP), in which policies are optimized under constraints. However, when encountering any potential danger, humans tend to stop immediately, and rarely learn to behave safely while in danger. Motivated by human learning, we introduce a new approach to safe RL under the framework of the Early Terminated MDP (ET-MDP). We first define the ET-MDP as an unconstrained MDP with the same optimal value function as its corresponding CMDP. We then propose an off-policy algorithm based on context models to solve the ET-MDP, which in turn solves the corresponding CMDP with better asymptotic performance and improved learning efficiency. Experiments on various CMDP tasks show a substantial improvement over previous methods that solve the CMDP directly.
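The core construction above — turning a CMDP into an unconstrained MDP by ending the episode as soon as the cost budget is exhausted — can be sketched as an environment wrapper. This is an illustrative sketch only: the names (`ToyEnv`, `EarlyTerminationWrapper`, `cost_limit`) and the toy dynamics are assumptions, not taken from the paper's implementation.

```python
class ToyEnv:
    """Minimal stand-in for a CMDP-style environment: each step returns a
    reward and a separate safety cost. Purely illustrative dynamics."""

    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        reward = 1.0
        cost = 1.0 if action == 1 else 0.0  # action 1 is the "unsafe" action
        done = self.t >= 10                 # natural horizon of 10 steps
        return self.t, reward, cost, done


class EarlyTerminationWrapper:
    """Wraps a CMDP-style env into an unconstrained MDP: instead of
    constraining the policy, the episode terminates as soon as the
    accumulated cost exceeds the budget (hypothetical `cost_limit`)."""

    def __init__(self, env, cost_limit):
        self.env = env
        self.cost_limit = cost_limit
        self.total_cost = 0.0

    def reset(self):
        self.total_cost = 0.0
        return self.env.reset()

    def step(self, action):
        obs, reward, cost, done = self.env.step(action)
        self.total_cost += cost
        if self.total_cost > self.cost_limit:
            done = True  # early termination replaces the explicit constraint
        return obs, reward, done


# Usage: an agent that always acts unsafely loses the rest of the episode,
# so maximizing return in the wrapped MDP discourages constraint violation.
env = EarlyTerminationWrapper(ToyEnv(), cost_limit=2.0)
env.reset()
steps, done = 0, False
while not done:
    _, _, done = env.step(1)  # always take the unsafe action
    steps += 1
print(steps)  # episode is cut to 3 steps: cumulative cost 3.0 > limit 2.0
```

Because the wrapped environment is an ordinary unconstrained MDP, any standard RL algorithm can be applied to it; the paper's contribution is showing this ET-MDP shares its optimal value function with the original CMDP and solving it with a context-model-based off-policy method.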

Hao Sun, Ziping Xu, Meng Fang, Zhenghao Peng, Jiadong Guo, Bo Dai, Bolei Zhou• 2021

Related benchmarks

Task       Dataset                  Result                  Rank
PointNav   MetaUrban 12K (Unseen)   Success Rate (SR) 53    9
PointNav   MetaUrban 12K (test)     Success Rate (SR) 57    9
SocialNav  MetaUrban 12K (test)     Success Rate (SR) 5     9
SocialNav  MetaUrban 12K (Unseen)   Success Rate (SR) 2     9
