
Safe Reinforcement Learning in Constrained Markov Decision Processes

About

Safe reinforcement learning is a promising approach for optimizing the policy of an agent that operates in safety-critical applications. In this paper, we propose an algorithm, SNO-MDP, that explores and optimizes Markov decision processes under unknown safety constraints. Specifically, we take a stepwise approach to optimizing safety and cumulative reward: the agent first learns the safety constraints by expanding the safe region, and then optimizes the cumulative reward within the certified safe region. We provide theoretical guarantees on both satisfaction of the safety constraint and near-optimality of the cumulative reward under proper regularity assumptions. We demonstrate the effectiveness of SNO-MDP through two experiments: one uses synthetic data in a new, openly available environment named GP-SAFETY-GYM, and the other simulates Mars surface exploration using real observation data.
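The stepwise idea in the abstract can be illustrated with a minimal toy sketch. This is not the paper's implementation: SNO-MDP expands the safe region using Gaussian-process confidence bounds on an unknown safety function, whereas the hypothetical `sno_mdp_sketch` below observes safety values directly on a 1-D chain of states. It only shows the two phases: certify a safe region by expansion from a seed state, then maximize reward restricted to that region.

```python
import numpy as np

def sno_mdp_sketch(safety, reward, seed_state, threshold, n_expand=50):
    # Hypothetical toy sketch (not the paper's implementation) of the
    # stepwise approach on a 1-D chain of states. The real algorithm
    # certifies safety via GP confidence bounds; here the true safety
    # values are observed directly for simplicity.
    n = len(safety)
    safe = {seed_state}  # initial certified-safe seed state
    # Phase 1: expand the certified safe region by checking neighbors
    # of already-safe states against the safety threshold.
    for _ in range(n_expand):
        frontier = {s + d for s in safe for d in (-1, 1)
                    if 0 <= s + d < n and (s + d) not in safe}
        newly_safe = {s for s in frontier if safety[s] >= threshold}
        if not newly_safe:
            break
        safe |= newly_safe
    # Phase 2: optimize reward restricted to the certified safe region.
    best = max(safe, key=lambda s: reward[s])
    return safe, best
```

Note how the agent never enters a state whose safety falls below the threshold, even if a high reward lies beyond it; optimization is confined to the certified region, mirroring the safety-first ordering described above.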

Akifumi Wachi, Yanan Sui • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Safety-constrained Reinforcement Learning | Grid-world, Time-Invariant Safety Threshold (100 randomly generated environments) | Safety Violation Count: 0.00e+0 | 2 |
| Safety-constrained Reinforcement Learning | Grid-world, Time-Variant Safety Threshold (100 randomly generated environments) | Safety Violations: 87 | 2 |
