Neuro-symbolic Action Masking for Deep Reinforcement Learning
About
Deep reinforcement learning (DRL) may explore infeasible actions during training and execution. Existing approaches assume a symbol grounding function that maps high-dimensional states to consistent symbolic representations, together with manually specified action masking techniques to constrain actions. In this paper, we propose Neuro-symbolic Action Masking (NSAM), a novel framework that automatically learns symbolic models of high-dimensional states, consistent with given domain constraints, in a minimally supervised manner during the DRL process. Based on the learned symbolic model of states, NSAM learns action masks that rule out infeasible actions. NSAM enables end-to-end integration of symbolic reasoning and deep policy optimization, where improvements in symbolic grounding and policy learning mutually reinforce each other. We evaluate NSAM on multiple constrained domains, and experimental results demonstrate that NSAM significantly improves the sample efficiency of the DRL agent while substantially reducing constraint violations.
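The core masking step described above can be illustrated with a minimal sketch. This is not NSAM's learned mask (which comes from the learned symbolic model); it only shows the generic mechanism of applying a binary feasibility mask to policy logits so that infeasible actions receive zero probability. The function name `masked_policy` and the toy logits are illustrative assumptions.

```python
import numpy as np

def masked_policy(logits, mask):
    """Apply a binary action mask to policy logits.

    Actions with mask == 0 are treated as infeasible: their logits
    are set to -inf, so they get exactly zero probability after the
    softmax. Feasible actions keep their relative preferences.
    """
    masked = np.where(mask.astype(bool), logits, -np.inf)
    shifted = masked - masked.max()  # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# Toy example: 4 actions; actions 1 and 3 violate a constraint.
logits = np.array([2.0, 5.0, 1.0, 3.0])
mask = np.array([1, 0, 1, 0])
probs = masked_policy(logits, mask)
```

In NSAM, the mask itself is produced by the learned symbolic model of the state rather than hand-specified, but the policy-side application is this same renormalization over feasible actions.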
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Sudoku Solving | Sudoku 2x2 | Final Reward: 1.3 | 14 |
| Graph Coloring | Graph Coloring G1 | Final Reward: 1 | 7 |
| Graph Coloring | Graph Coloring G2 | Final Reward: 1 | 7 |
| Graph Coloring | Graph Coloring G3 | Final Reward: 1 | 7 |
| Graph Coloring | Graph Coloring G4 | Final Reward: 1 | 7 |
| N-Queens Problem | N-Queens N=4 | Final Reward: 1 | 7 |
| N-Queens Problem | N-Queens N=10 | Final Reward: 1 | 7 |
| Sudoku Solving | Sudoku 3x3 | Final Reward: 160 | 7 |
| Sudoku Solving | Sudoku 4x4 | Final Reward: 2.1 | 7 |
| Sudoku Solving | Sudoku 5x5 | Final Reward: 2.7 | 7 |