Learning to branch with Tree MDPs
About
State-of-the-art Mixed Integer Linear Program (MILP) solvers combine systematic tree search with a plethora of hard-coded heuristics, such as the branching rule. The idea of learning branching rules from data has received increasing attention recently, and promising results have been obtained by learning fast approximations of the strong branching expert. In this work, we instead propose to learn branching rules from scratch via Reinforcement Learning (RL). We revisit the work of Etheve et al. (2020) and propose tree Markov Decision Processes, or tree MDPs, a generalization of temporal MDPs that provides a more suitable framework for learning to branch. We derive a tree policy gradient theorem, which exhibits a better credit assignment compared to its temporal counterpart. We demonstrate through computational experiments that tree MDPs improve the learning convergence, and offer a promising framework for tackling the learning-to-branch problem in MILPs.
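The core idea of the tree-MDP credit assignment can be illustrated concretely. In a temporal MDP, a branching decision is credited with every cost incurred after it in solving order; in a tree MDP, it is only credited with the cost of the subtree it creates. The sketch below is an illustrative assumption based on the abstract, not the paper's implementation: `Node` and `subtree_return` are hypothetical names, and the per-node cost of 1 stands in for minimizing the final search-tree size.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A hypothetical branch-and-bound tree node (illustration only)."""
    children: list = field(default_factory=list)

def subtree_return(node: Node) -> int:
    # Tree-MDP credit for the branching decision taken at `node`:
    # the cost (here, 1 per node expanded) of the subtree it spawns,
    # not of everything expanded after it in temporal order.
    return 1 + sum(subtree_return(c) for c in node.children)

# Toy tree: root branches into a left child (with two leaves) and a right leaf.
root = Node(children=[Node(children=[Node(), Node()]), Node()])
print(subtree_return(root))  # 5: all five nodes of the tree
```

Under this view, the decision at the left child is credited only with its own three-node subtree, whereas a temporal return would also charge it for the sibling subtree explored later, which is the sharper credit assignment the tree policy gradient theorem formalizes.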
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| MILP Solving | Maximum independent set larger size (transfer) | Search Tree Nodes | 2.76e+3 | 10 |
| MILP Solving | Combinatorial auctions (test) | Search Tree Nodes | 86.1 | 10 |
| MILP Solving | Combinatorial auctions larger size (transfer) | Nodes Explored | 1.80e+3 | 10 |
| MILP Solving | Set covering same size as training (test) | Search Tree Nodes | 190.8 | 10 |
| MILP Solving | Set covering transfer larger size | Nodes Explored | 816.8 | 10 |
| MILP Solving | Maximum independent set same size as training (test) | Search Tree Nodes | 85.4 | 10 |
| MILP Branching | Multiple knapsack same size as training (test) | B&B Tree Size | 135.8 | 5 |
| MILP Branching | Multiple knapsack transfer larger size | B&B Tree Size | 425.3 | 5 |
| MILP Solving | Multiple knapsack same size as training (test) | Search Tree Nodes | 135.8 | 5 |
| MILP Solving | Multiple knapsack transfer larger size | Nodes Explored | 425.3 | 5 |