Risk-Averse Offline Reinforcement Learning
About
Training Reinforcement Learning (RL) agents in high-stakes applications may be prohibitive due to the risk associated with exploration, so the agent can only use data previously collected by safe policies. While previous work considers optimizing average performance using offline data, we focus on optimizing a risk-averse criterion, namely the Conditional Value-at-Risk (CVaR). In particular, we present the Offline Risk-Averse Actor-Critic (O-RAAC), a model-free RL algorithm that learns risk-averse policies in a fully offline setting. We show that O-RAAC learns policies with higher CVaR than risk-neutral approaches in different robot control tasks. Furthermore, optimizing a risk-averse criterion guarantees distributional robustness of the average performance with respect to particular distribution shifts. We demonstrate empirically that in the presence of natural distribution shifts, O-RAAC learns policies with good average performance.
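To make the CVaR objective concrete, below is a minimal sketch of how the empirical CVaR of a batch of episode returns can be computed: CVaR at level alpha is the mean of the worst alpha-fraction of returns. The function name and the default alpha are illustrative assumptions, not taken from the paper or its code.

```python
import numpy as np

def empirical_cvar(returns, alpha=0.1):
    """Empirical CVaR_alpha: mean of the worst alpha-fraction of episode returns.

    returns: array-like of per-episode returns collected during evaluation.
    alpha:   tail level in (0, 1]; alpha=1.0 recovers the ordinary mean (risk-neutral).
    """
    returns = np.sort(np.asarray(returns, dtype=np.float64))
    k = max(1, int(np.ceil(alpha * len(returns))))  # number of tail samples
    return returns[:k].mean()

# Example: a risk-averse policy is preferred if its CVaR is higher,
# even when its mean return is similar to a risk-neutral baseline.
print(empirical_cvar([10.0, 9.5, 8.0, -20.0, 9.0], alpha=0.2))
```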
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | Stochastic D4RL Cheetah MuJoCo (Medium) | Mean Return | 361.4 | 8 |
| Offline Reinforcement Learning | Stochastic D4RL Hopper Medium MuJoCo | Mean Return | 1.01e+3 | 8 |
| Offline Reinforcement Learning | D4RL Cheetah Stochastic MuJoCo (Mixed) | Mean Return | 307.1 | 8 |
| Offline Reinforcement Learning | Stochastic D4RL Hopper MuJoCo (Mixed) | Mean Return | 876.3 | 8 |
| Offline Reinforcement Learning | Stochastic D4RL Walker2d Medium MuJoCo | Mean Return | 1.13e+3 | 8 |
| Offline Reinforcement Learning | D4RL Walker2d Stochastic MuJoCo (Mixed) | Mean Return | 222 | 8 |
| Robot navigation | Risky Ant (test) | Mean Return | -788.1 | 5 |
| Offline Reinforcement Learning | D4RL Mujoco v0 (various) | HalfCheetah Return (Random) | 13.5 | 5 |
| Robot navigation | Risky PointMass (test) | Mean Return | -10.67 | 5 |