# Maximum Independent Set: Self-Training through Dynamic Programming

## About
This work presents a graph neural network (GNN) framework for solving the maximum independent set (MIS) problem, inspired by dynamic programming (DP). Specifically, given a graph, we propose a DP-like recursive algorithm based on GNNs that first constructs two smaller subgraphs, predicts which one has the larger MIS, and then uses that subgraph in the next recursive call. Training our algorithm requires annotated comparisons of different graphs with respect to their MIS size. Annotating the comparisons with the output of our algorithm yields a self-training process: a more accurate algorithm produces more accurate self-annotations of the comparisons, which in turn further improve the algorithm. We provide numerical evidence showing the superiority of our method over prior methods on multiple synthetic and real-world datasets.
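The recursion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `compare_branches` stands in for the trained GNN comparator (here replaced by a crude graph-density heuristic just so the sketch runs), and all function names are hypothetical. At each call, branching on a vertex `v` yields the two smaller subgraphs, one assuming `v` is excluded from the solution and one assuming it is included; the comparator picks the branch predicted to have the larger MIS.

```python
from typing import Dict, Set

Graph = Dict[int, Set[int]]  # adjacency sets

def remove(graph: Graph, drop: Set[int]) -> Graph:
    """Induced subgraph with the vertices in `drop` removed."""
    return {v: nbrs - drop for v, nbrs in graph.items() if v not in drop}

def compare_branches(g_exclude: Graph, g_include: Graph) -> bool:
    """Placeholder for the paper's GNN comparator (hypothetical heuristic).
    Returns True if the 'include v' branch (MIS of g_include, plus 1 for v)
    is predicted at least as large as the 'exclude v' branch."""
    def score(g: Graph) -> float:
        n = len(g)
        m = sum(len(nbrs) for nbrs in g.values()) // 2
        return n - m / max(n, 1)  # crude proxy: sparser graphs have larger MIS
    return score(g_include) + 1 >= score(g_exclude)

def recursive_mis(graph: Graph) -> Set[int]:
    """DP-like recursion: branch on one vertex, keep only the branch the
    comparator predicts has the larger MIS, and recurse on it."""
    if not graph:
        return set()
    v = max(graph, key=lambda u: len(graph[u]))   # branch on a max-degree vertex
    g_exclude = remove(graph, {v})                # subgraph if v is excluded
    g_include = remove(graph, {v} | graph[v])     # subgraph if v is included
    if compare_branches(g_exclude, g_include):
        return {v} | recursive_mis(g_include)
    return recursive_mis(g_exclude)
```

Swapping the heuristic comparator for a learned one is the core of the method: the comparator only ever answers "which of these two graphs has the larger MIS?", and the self-annotated comparisons used to train it come from running this very recursion.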
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Minimum Vertex Cover | RB200 (test) | Approximation Ratio | 1.031 | 24 |
| Maximum Independent Set | Twitter (test) | Approximation Ratio | 0.977 | 13 |
| Maximum Independent Set | SPECIAL (test) | Approximation Ratio | 0.996 | 13 |
| Minimum Vertex Cover | RB500 (test) | Approximation Ratio | 1.015 | 13 |
| Maximum Independent Set | COLLAB (test) | Approximation Ratio | 0.99 | 12 |
| Maximum Independent Set | RB (test) | Approximation Ratio | 0.836 | 12 |
| Maximum Independent Set | IMDB (test) | Avg Approx Ratio | 1 | 10 |
| Maximum Independent Set | ER (Erdos Renyi) (test) | Avg Approx Ratio | 95.4 | 10 |
| Maximum Independent Set | BA (Barabasi Albert) (test) | Approximation Ratio | 0.942 | 10 |
| Maximum Independent Set | WS (Watts Strogatz) (test) | Avg Approx Ratio | 0.831 | 10 |