
Evolutionary learning of interpretable decision trees

About

Reinforcement learning techniques have achieved human-level performance in several tasks in the last decade. However, in recent years, the need for interpretability has emerged: we want to be able to understand how a system works and the reasons behind its decisions. Not only do we need interpretability to assess the safety of the produced systems, but we also need it to extract knowledge about unknown problems. While some techniques that optimize decision trees for reinforcement learning do exist, they usually employ greedy algorithms or do not exploit the rewards given by the environment. This means that these techniques may easily get stuck in local optima. In this work, we propose a novel approach to interpretable reinforcement learning that uses decision trees. We present a two-level optimization scheme that combines the advantages of evolutionary algorithms with those of Q-learning. In this way, we decompose the problem into two sub-problems: finding a meaningful and useful decomposition of the state space, and associating an action to each state. We test the proposed method on three well-known reinforcement learning benchmarks, on which it proves competitive with the state of the art in both performance and interpretability. Finally, we perform an ablation study confirming that the two-level optimization scheme improves performance in non-trivial environments with respect to a one-level optimization approach.
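To make the two-level scheme concrete, here is a minimal sketch of the idea: the outer loop searches over decision-tree structures (the state-space decomposition), while each leaf learns its action via tabular Q-learning from environment rewards. This is not the paper's implementation; the toy corridor environment, the tree representation, and the use of random search in place of the paper's evolutionary operators are all simplifying assumptions.

```python
import random

class Leaf:
    """Leaf of the decision tree; learns which action to take via Q-learning."""
    def __init__(self, n_actions, alpha=0.1, gamma=0.9, eps=0.05):
        self.q = [0.0] * n_actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self):
        # Epsilon-greedy selection over this leaf's Q-values.
        if random.random() < self.eps:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)

    def update(self, action, reward, next_leaf):
        # Q-learning target: r + gamma * max_a' Q(leaf of next state, a').
        target = reward + self.gamma * max(next_leaf.q)
        self.q[action] += self.alpha * (target - self.q[action])

class Split:
    """Internal node: routes on one observation feature vs. a threshold."""
    def __init__(self, feature, threshold, left, right):
        self.feature, self.threshold = feature, threshold
        self.left, self.right = left, right

def route(node, obs):
    """Walk from the root down to the leaf that covers observation `obs`."""
    while isinstance(node, Split):
        node = node.left if obs[node.feature] <= node.threshold else node.right
    return node

def evaluate(tree, env_step, env_reset, episodes=30, horizon=50):
    """Inner level: run Q-learning in the leaves; fitness = mean episode return."""
    returns = []
    for _ in range(episodes):
        obs, total = env_reset(), 0.0
        for _ in range(horizon):
            leaf = route(tree, obs)
            action = leaf.act()
            obs, reward, done = env_step(obs, action)
            leaf.update(action, reward, route(tree, obs))
            total += reward
            if done:
                break
        returns.append(total)
    return sum(returns) / len(returns)

def random_tree(n_features, n_actions, depth=2):
    """Outer-level primitive: sample a random split structure (the genotype)."""
    if depth == 0:
        return Leaf(n_actions)
    return Split(random.randrange(n_features), random.uniform(0.0, 1.0),
                 random_tree(n_features, n_actions, depth - 1),
                 random_tree(n_features, n_actions, depth - 1))

if __name__ == "__main__":
    # Toy 1-D corridor: action 0 steps left, action 1 steps right;
    # reward 1 and episode end on reaching x >= 0.9.
    def reset():
        return [random.uniform(0.0, 0.5)]

    def step(obs, action):
        x = obs[0] + (0.1 if action == 1 else -0.1)
        return [x], (1.0 if x >= 0.9 else 0.0), (x >= 0.9 or x <= 0.0)

    random.seed(0)
    # Random search stands in for the evolutionary outer loop here.
    best, best_fit = None, float("-inf")
    for _ in range(20):
        tree = random_tree(n_features=1, n_actions=2)
        fit = evaluate(tree, step, reset)
        if fit > best_fit:
            best, best_fit = tree, fit
    print("best fitness:", best_fit)
```

The key property of the decomposition is visible here: the tree only decides *which* leaf a state falls into, while the reward signal (via Q-learning) decides *what* that leaf should do, so the evolutionary search never has to guess actions directly.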

Leonardo Lucio Custode, Giovanni Iacca · 2020

Related benchmarks

Task                    Dataset                 Metric         Result   Rank
Reinforcement Learning  CartPole v1 (test)      Total Reward   500      25
Reinforcement Learning  LunarLander v2          Final Return   272.1    23
Reinforcement Learning  MountainCar v0 (test)   Total Reward   -101.7   10
