
Molecule Attention Transformer

About

Designing a single neural network architecture that performs competitively across a range of molecule property prediction tasks remains largely an open challenge, and its solution may unlock widespread use of deep learning in the drug discovery industry. To move towards this goal, we propose the Molecule Attention Transformer (MAT). Our key innovation is to augment the attention mechanism in the Transformer with inter-atomic distances and the molecular graph structure. Experiments show that MAT performs competitively on a diverse set of molecular prediction tasks. Most importantly, with simple self-supervised pretraining, MAT requires tuning of only a few hyperparameter values to achieve state-of-the-art performance on downstream tasks. Finally, we show that the attention weights learned by MAT are interpretable from a chemical point of view.
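The augmented attention described above can be sketched as a weighted blend of standard scaled dot-product attention, a term derived from the inter-atomic distance matrix, and the molecular graph's adjacency matrix. The sketch below is a minimal NumPy illustration, not the authors' implementation: the mixing weights `lam` and the choice of `softmax(-D)` to turn distances into attention weights are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def molecule_attention(Q, K, V, D, A, lam=(0.5, 0.25, 0.25)):
    """Sketch of MAT-style attention over the atoms of one molecule.

    Q, K, V : (n_atoms, d) query/key/value matrices
    D       : (n_atoms, n_atoms) inter-atomic distance matrix
    A       : (n_atoms, n_atoms) adjacency matrix of the molecular graph
    lam     : illustrative mixing weights for the three terms (assumption)
    """
    d_k = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # standard self-attention term
    dist = softmax(-D)                      # closer atoms get larger weight (assumption)
    la, ld, lg = lam
    weights = la * attn + ld * dist + lg * A
    return weights @ V
```

With `lam = (1, 0, 0)` this reduces to vanilla self-attention, which makes the distance and graph terms easy to ablate.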

Łukasz Maziarka, Tomasz Danel, Sławomir Mucha, Krzysztof Rataj, Jacek Tabor, Stanisław Jastrzębski • 2020

Related benchmarks

Task                                       | Dataset                      | Result     | Rank
Protein-ligand binding affinity prediction | CSAR-HiQ set (test)          | RMSE 1.879 | 20
Binding affinity prediction                | PDBBind core set 2016 (test) | R 0.747    | 17
Protein-ligand binding affinity prediction | PDBbind core set (test)      | RMSE 1.457 | 16
Protein-ligand binding affinity prediction | PDBBind                      | RMSE 1.457 | 16
