
ChemBERTa: Large-Scale Self-Supervised Pretraining for Molecular Property Prediction

About

GNNs and chemical fingerprints are the predominant approaches to representing molecules for property prediction. In NLP, however, transformers have become the de facto standard for representation learning thanks to their strong downstream task transfer. In parallel, the software ecosystem around transformers is maturing rapidly, with libraries like HuggingFace Transformers and BertViz enabling streamlined training and introspection. In this work, we make one of the first attempts to systematically evaluate transformers on molecular property prediction tasks via our ChemBERTa model. ChemBERTa scales well with pretraining dataset size, offering competitive downstream performance on MoleculeNet and useful attention-based visualization modalities. Our results suggest that transformers offer a promising avenue of future work for molecular representation learning and property prediction. To facilitate these efforts, we release a curated dataset of 77M SMILES from PubChem suitable for large-scale self-supervised pretraining.
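As a concrete illustration of the workflow the abstract describes, the sketch below loads a pretrained ChemBERTa checkpoint through the HuggingFace Transformers API and runs masked-token prediction on a SMILES string, the same self-supervised objective used for pretraining. This is a minimal sketch, not code from the paper; the checkpoint name is one of the publicly released ChemBERTa models and is assumed here.

```python
# Minimal sketch: masked-token prediction with a pretrained ChemBERTa
# checkpoint via HuggingFace Transformers. The checkpoint name is an
# assumption (one of the publicly released ChemBERTa models).
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

model_name = "seyonec/ChemBERTa-zinc-base-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Mask one token of a SMILES string and ask the model to fill it in.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
smiles = f"CC(=O)Oc1ccccc1C(=O){tokenizer.mask_token}"  # aspirin with the final atom masked
for prediction in fill_mask(smiles, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 3))
```

For property prediction, the same checkpoint would instead be loaded with a classification head (e.g. AutoModelForSequenceClassification) and fine-tuned on labeled MoleculeNet data.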

Seyone Chithrananda, Gabriel Grand, Bharath Ramsundar • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Molecular property prediction | MoleculeNet BBBP (scaffold) | ROC-AUC | 72.8 | 117 |
| Molecular property prediction | MoleculeNet BACE (scaffold) | ROC-AUC | 79.9 | 87 |
| Molecular property prediction | MoleculeNet HIV (scaffold) | ROC-AUC | 62.2 | 66 |
| Retrosynthesis | USPTO-50k reaction type unknown (test) | Top-1 accuracy | 43.9 | 59 |
| Binary classification | MoleculeNet HIV DeepChem (test) | ROC-AUC | 0.802 | 32 |
| Classification | MoleculeNet BBBP (test) | ROC-AUC | 0.643 | 30 |
| Binary classification | MoleculeNet ClinTox DeepChem (test) | ROC-AUC | 92.9 | 27 |
| Regression | MoleculeNet (scaffold) | Lipo | 0.8 | 24 |
| Molecular classification | MoleculeNet | BACE | 0.8141 | 20 |
| Molecular property prediction (classification) | MoleculeNet (test) | BBBP | 70.6 | 20 |

Showing 10 of 22 rows.
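Several rows above use scaffold splits, where structurally similar molecules are confined to the same partition so that the test set probes generalization to unseen scaffolds. As a hedged sketch of how such a benchmark is typically loaded, the snippet below uses DeepChem's MoleculeNet loaders (referenced in the table); the featurizer choice is illustrative, and a recent DeepChem release is assumed.

```python
# Sketch: loading the MoleculeNet BBBP benchmark with a scaffold split
# via DeepChem. The ECFP featurizer here is illustrative; ChemBERTa
# itself consumes raw SMILES strings (available via dataset.ids).
import deepchem as dc

tasks, datasets, transformers = dc.molnet.load_bbbp(
    featurizer="ECFP", splitter="scaffold"
)
train, valid, test = datasets
print(tasks)                        # task list, e.g. the BBB permeability label
print(train.X.shape, test.X.shape)  # feature matrices for each split
```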
