
M-Walk: Learning to Walk over Graphs using Monte Carlo Tree Search

About

Learning to walk over a graph towards a target node for a given query and a source node is an important problem in applications such as knowledge base completion (KBC). It can be formulated as a reinforcement learning (RL) problem with a known state transition model. To overcome the challenge of sparse rewards, we develop a graph-walking agent called M-Walk, which consists of a deep recurrent neural network (RNN) and Monte Carlo Tree Search (MCTS). The RNN encodes the state (i.e., history of the walked path) and maps it separately to a policy and Q-values. In order to effectively train the agent from sparse rewards, we combine MCTS with the neural policy to generate trajectories yielding more positive rewards. From these trajectories, the network is improved in an off-policy manner using Q-learning, which modifies the RNN policy via parameter sharing. Our proposed RL algorithm repeatedly applies this policy-improvement step to learn the model. At test time, MCTS is combined with the neural policy to predict the target node. Experimental results on several graph-walking benchmarks show that M-Walk is able to learn better policies than other RL-based methods, which are mainly based on policy gradients. M-Walk also outperforms traditional KBC baselines.
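To make the search procedure concrete, below is a minimal, self-contained sketch of MCTS guided by a policy prior over a toy graph. It is an illustration only: the graph, the sparse terminal reward, and the uniform prior are stand-ins (in M-Walk the prior and Q-values come from the trained RNN, and the real environment is a knowledge graph), and the PUCT selection rule is a common choice assumed here rather than the paper's exact formula.

```python
import math
from collections import defaultdict

# Toy graph: node -> list of neighbor nodes the agent can walk to.
# Stand-in for a knowledge graph; "target" is the answer node.
GRAPH = {
    "src": ["a", "b"],
    "a": ["target"],
    "b": [],          # dead end: walks ending here earn no reward
    "target": [],
}

def prior(node, actions):
    """Uniform policy prior; in M-Walk this would come from the RNN."""
    p = 1.0 / len(actions)
    return {a: p for a in actions}

def terminal_reward(node):
    """Sparse reward: +1 only if the walk ends at the target node."""
    return 1.0 if node == "target" else 0.0

def mcts_search(root, n_sim=200, c_puct=1.0, max_depth=5):
    """Run n_sim simulated walks from root; return the most-visited action."""
    N = defaultdict(int)      # visit count per (node, action) edge
    W = defaultdict(float)    # accumulated reward per (node, action) edge

    for _ in range(n_sim):
        node, depth, path = root, 0, []
        # Selection: descend the graph with a PUCT rule that mixes the
        # policy prior (exploration) with the running Q estimate.
        while GRAPH[node] and depth < max_depth:
            actions = GRAPH[node]
            p = prior(node, actions)
            total_n = sum(N[(node, a)] for a in actions)

            def puct(a):
                q = W[(node, a)] / N[(node, a)] if N[(node, a)] else 0.0
                u = c_puct * p[a] * math.sqrt(total_n + 1) / (1 + N[(node, a)])
                return q + u

            a = max(actions, key=puct)
            path.append((node, a))
            node, depth = a, depth + 1

        # Backup: propagate the sparse terminal reward along the path.
        r = terminal_reward(node)
        for s, a in path:
            N[(s, a)] += 1
            W[(s, a)] += r

    # Act greedily with respect to root visit counts.
    return max(GRAPH[root], key=lambda a: N[(root, a)])

best_action = mcts_search("src")
```

Because walks through "a" reach the target and walks through "b" do not, the visit counts concentrate on "a"; in M-Walk, trajectories generated this way (which yield more positive rewards than the raw policy) are then used to improve the network via off-policy Q-learning.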

Yelong Shen, Jianshu Chen, Po-Sen Huang, Yuqing Guo, Jianfeng Gao • 2018

Related benchmarks

Task | Dataset | Result | Rank
Link Prediction | WN18RR (test) | -- | 380
Link Prediction | FB15k-237 | MRR 23.2 | 280
Link Prediction | WN18RR | -- | 175
Knowledge Graph Completion | WN18RR | Hits@1 0.414 | 165
Link Prediction | NELL995 | Hits@3 81 | 18
Knowledge Graph Reasoning | NELL-995 (test) | Athlete Plays For Team Accuracy 84.7 | 8

Other info

Code
