
Particle Swarm Optimization for Generating Interpretable Fuzzy Reinforcement Learning Policies

About

Fuzzy controllers are efficient and interpretable system controllers for continuous state and action spaces. To date, such controllers have been constructed manually or trained automatically, either using expert-generated, problem-specific cost functions or by incorporating detailed knowledge about the optimal control strategy. Neither requirement is typically met in real-world reinforcement learning (RL) problems. In such applications, online learning is often prohibited for safety reasons, because it requires exploration of the problem's dynamics during policy training. We introduce a fuzzy particle swarm reinforcement learning (FPSRL) approach that constructs fuzzy RL policies solely by training parameters on world models that simulate real system dynamics. These world models are created by an autonomous machine learning technique that uses previously generated transition samples of a real system. To the best of our knowledge, this approach is the first to relate self-organizing fuzzy controllers to model-based batch RL. FPSRL is therefore intended for domains where online learning is prohibited, system dynamics are relatively easy to model from previously generated default policy transition samples, and a relatively easily interpretable control policy is expected to exist. The efficiency of the proposed approach on problems from such domains is demonstrated using three standard RL benchmarks, i.e., mountain car, cart-pole balancing, and cart-pole swing-up. Our experimental results demonstrate high-performing, interpretable fuzzy policies.
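The core idea above, i.e., particle swarm optimization (PSO) tuning the parameters of a small fuzzy policy against a world model, can be sketched minimally as follows. This is not the paper's implementation: the toy 1-D dynamics, the single Gaussian membership function, and all hyperparameters (`w`, `c1`, `c2`, swarm size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_return(params):
    """Roll out a one-rule fuzzy policy on a toy mountain-car-like world
    model (hypothetical stand-in for the learned model) and return the
    accumulated reward. params = (rule center, rule width, rule action)."""
    center, width, action = params
    x, v, ret = -0.5, 0.0, 0.0
    for _ in range(100):
        # Gaussian membership degree of the current position in the rule
        membership = np.exp(-((x - center) / max(abs(width), 1e-3)) ** 2)
        a = float(np.clip(membership * action, -1.0, 1.0))
        v += 0.001 * a - 0.0025 * np.cos(3 * x)   # toy dynamics
        x += v
        ret += 0.0 if x >= 0.45 else -1.0          # -1 per step until the goal
    return ret

def pso(objective, dim=3, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO maximizing `objective` over [-1, 1]^dim."""
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    best_pos = pos.copy()
    best_fit = np.array([objective(p) for p in pos])
    g = best_pos[best_fit.argmax()].copy()        # global best position
    g_fit = best_fit.max()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull
        vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        fit = np.array([objective(p) for p in pos])
        improved = fit > best_fit
        best_pos[improved], best_fit[improved] = pos[improved], fit[improved]
        if fit.max() > g_fit:
            g, g_fit = pos[fit.argmax()].copy(), fit.max()
    return g, g_fit

best_params, best_return = pso(policy_return)
```

Because the learned rule parameters (center, width, action) are directly readable, the resulting policy remains interpretable, which is the motivation for combining PSO with fuzzy rules rather than, say, a neural policy.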

Daniel Hein, Alexander Hentschel, Thomas Runkler, Steffen Udluft · 2016

Related benchmarks

Task                 Dataset                  Metric            Result    Rank
Global Optimization  F6 benchmark function    Final Error       0.25      14
Global Optimization  F9 benchmark function    Final Error       0.51      14
Global Optimization  F1                       Final Error       12        14
Global Optimization  F8 benchmark function    Final Error (ε)   23        14
Global Optimization  F4                       Final Error       0.098     14
Global Optimization  F10 benchmark function   Final Error       2.90e+3   14
Global Optimization  F2 benchmark function    Final Error       5.3       14
Global Optimization  F5 benchmark function    Final Error       20        14
Global Optimization  F3 benchmark function    Computation Time  47.04     14
Global Optimization  F7                       Final Error       2.00e+10  14
