
Graph Diffusion Policy Optimization

About

Recent research has made significant progress in optimizing diffusion models for downstream objectives, an important pursuit in fields such as graph generation for drug design. However, directly applying these methods to graphs presents challenges and results in suboptimal performance. This paper introduces graph diffusion policy optimization (GDPO), a novel approach that optimizes graph diffusion models for arbitrary (e.g., non-differentiable) objectives using reinforcement learning. GDPO is based on an eager policy gradient tailored to graph diffusion models, developed through careful analysis and yielding improved performance. Experimental results show that GDPO achieves state-of-the-art performance on various graph generation tasks with complex and diverse objectives. Code is available at https://github.com/sail-sg/GDPO.
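The core idea, optimizing a generative policy for a non-differentiable reward with a policy gradient, can be illustrated with a minimal REINFORCE-style sketch. This is not the paper's method or API: the toy categorical policy, `toy_reward`, and all names below are illustrative assumptions standing in for the graph diffusion model and the downstream objective (e.g., a docking score).

```python
# Minimal REINFORCE-style sketch: optimize a toy categorical "denoising"
# policy against a black-box, non-differentiable reward. Illustrative only;
# all names are assumptions, not GDPO's actual implementation.
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 4                 # toy action space (e.g., discrete graph edits)
theta = np.zeros(N_ACTIONS)   # logits of the toy policy

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def toy_reward(action):
    # Black-box objective: no gradients available, only scalar feedback.
    return 1.0 if action == 2 else 0.0

def reinforce_step(theta, lr=0.5, batch=64):
    probs = softmax(theta)
    grad = np.zeros_like(theta)
    for _ in range(batch):
        a = rng.choice(N_ACTIONS, p=probs)
        r = toy_reward(a)
        # grad of log pi(a) for a categorical policy: one_hot(a) - probs
        grad += r * (np.eye(N_ACTIONS)[a] - probs)
    return theta + lr * grad / batch

for _ in range(200):
    theta = reinforce_step(theta)
```

After training, the policy concentrates probability mass on the rewarded action, showing how scalar feedback alone can steer a sampler; GDPO applies this principle to the multi-step denoising trajectories of a graph diffusion model.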

Yijing Liu, Chao Du, Tianyu Pang, Chongxuan Li, Min Lin, Wei Chen • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Graph generation | Planar Graphs (test) | Unique Node % | 73.83 | 25 |
| Graph generation | SBM Graphs (test) | Degree | 0.15 | 14 |
| Protein Docking | ZINC250k target: braf (test) | DS (top 5%) | -11.197 | 9 |
| Protein Docking | ZINC250k target: parp1 (test) | DS (top 5%) | -10.938 | 9 |
| Protein Docking | ZINC250k target: fa7 (test) | DS (top 5%) | -8.691 | 9 |
| Protein Docking | ZINC250k target: 5ht1b (test) | DS (top 5%) | -11.304 | 9 |
| Protein Docking | ZINC250k target: jak2 (test) | DS (top 5%) | -10.183 | 9 |
| Molecule property optimization | MOSES braf 1.0 (test) | -- | -- | 2 |
| Molecule property optimization | MOSES fa7 1.0 (test) | -- | -- | 1 |
| Molecule property optimization | ZINC250k parp1 | -- | -- | 1 |

Showing 10 of 13 rows.
