
HARP: Hierarchical Representation Learning for Networks

About

We present HARP, a novel method for learning low-dimensional embeddings of a graph's nodes which preserve higher-order structural features. Our proposed method achieves this by compressing the input graph prior to embedding it, effectively avoiding troublesome embedding configurations (i.e. local minima) which can pose problems to non-convex optimization. HARP works by finding a smaller graph which approximates the global structure of its input. This simplified graph is used to learn a set of initial representations, which serve as good initializations for learning representations in the original, detailed graph. We inductively extend this idea by decomposing a graph into a series of levels, and then embedding the hierarchy of graphs from the coarsest one to the original graph. HARP is a general meta-strategy that can improve state-of-the-art neural algorithms for embedding graphs, including DeepWalk, LINE, and Node2vec. Indeed, we demonstrate that applying HARP's hierarchical paradigm yields improved implementations for all three of these methods, as evaluated on classification tasks on real-world graphs such as DBLP, BlogCatalog, CiteSeer, and Arxiv, where we achieve a performance gain over the original implementations of up to 14% Macro-F1.
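The coarsen-then-refine pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `coarsen` here greedily merges adjacent node pairs as a stand-in for HARP's edge- and star-collapsing schemes, and `embed` is a placeholder for the base embedding method (DeepWalk, LINE, or Node2vec) that simply returns its initialization.

```python
import random

def coarsen(graph):
    """One level of coarsening: greedily merge matched pairs of adjacent
    nodes (a toy stand-in for HARP's edge/star collapsing)."""
    matched, fine_to_coarse, next_id = set(), {}, 0
    for u in graph:
        if u in matched:
            continue
        # Pick any still-unmatched neighbor to merge with, if one exists.
        v = next((w for w in graph[u] if w not in matched and w != u), None)
        fine_to_coarse[u] = next_id
        matched.add(u)
        if v is not None:
            fine_to_coarse[v] = next_id
            matched.add(v)
        next_id += 1
    # Project edges of the fine graph onto the merged super-nodes.
    coarse = {c: set() for c in range(next_id)}
    for u, nbrs in graph.items():
        for w in nbrs:
            cu, cw = fine_to_coarse[u], fine_to_coarse[w]
            if cu != cw:
                coarse[cu].add(cw)
                coarse[cw].add(u and cu or cu)  # symmetric edge
    return coarse, fine_to_coarse

def embed(graph, dim, init=None):
    """Placeholder for the base embedding method; returns the given
    initialization (or random vectors if none is supplied)."""
    if init is None:
        init = {u: [random.uniform(-1, 1) for _ in range(dim)]
                for u in graph}
    return init

def harp(graph, dim=2, levels=2):
    """HARP meta-strategy: coarsen repeatedly, embed the coarsest graph,
    then prolong each embedding as the initialization one level finer."""
    hierarchy, mappings = [graph], []
    for _ in range(levels):
        coarse, mapping = coarsen(hierarchy[-1])
        hierarchy.append(coarse)
        mappings.append(mapping)
    emb = embed(hierarchy[-1], dim)  # embed coarsest level first
    for g, mapping in zip(reversed(hierarchy[:-1]), reversed(mappings)):
        # Each fine node inherits (a copy of) its super-node's vector.
        init = {u: list(emb[mapping[u]]) for u in g}
        emb = embed(g, dim, init)
    return emb
```

In the real method, each call to `embed` would run the base algorithm to convergence at that level; the key idea is only that the coarse solution seeds the finer one.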

Haochen Chen, Bryan Perozzi, Yifan Hu, Steven Skiena · 2017

Related benchmarks

Task                      Dataset                  Metric          Result   Rank
Node Classification       DBLP                     Micro-F1        92.66    94
Node Classification       PPI                      Micro-F1        15.46    29
Node Classification       Wiki                     Micro-F1        0.4306   23
Node Embedding Learning   PPI                      Time (s)        106      20
Link Prediction           Yelp 20% edges (test)    AUC             74.3     15
Link Prediction           DBLP 20% edges (test)    AUC             65.9     15
Link Prediction           DBLP                     Precision@100   4.07     14
Link Prediction           Wiki                     Precision@100   1.01     14
Node Embedding Learning   DBLP                     Time (s)        90       14
Node Classification       Blog                     Micro-F1        36.52    14

Showing 10 of 17 rows.
