
SELFormer: Molecular Representation Learning via SELFIES Language Models

About

Automated computational analysis of the vast chemical space is critical for numerous fields of research such as drug discovery and material science. Representation learning techniques have recently been employed with the primary objective of generating compact and informative numerical expressions of complex data. One approach to efficiently learning molecular representations is processing string-based notations of chemicals via natural language processing (NLP) algorithms. The majority of the methods proposed so far utilize SMILES notations for this purpose; however, SMILES is associated with numerous problems related to validity and robustness, which may prevent the model from effectively uncovering the knowledge hidden in the data. In this study, we propose SELFormer, a transformer architecture-based chemical language model that utilizes a 100% valid, compact and expressive notation, SELFIES, as input, in order to learn flexible and high-quality molecular representations. SELFormer is pre-trained on two million drug-like compounds and fine-tuned for diverse molecular property prediction tasks. Our performance evaluation has revealed that SELFormer outperforms all competing methods, including graph learning-based approaches and SMILES-based chemical language models, on predicting aqueous solubility of molecules and adverse drug reactions. We also visualized molecular representations learned by SELFormer via dimensionality reduction, which indicated that even the pre-trained model can discriminate molecules with differing structural properties. We share SELFormer as a programmatic tool, together with its datasets and pre-trained models. Overall, our research demonstrates the benefit of using the SELFIES notation in the context of chemical language modeling and opens up new possibilities for the design and discovery of novel drug candidates with desired features.
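One practical reason SELFIES suits language models, as the abstract notes, is that every symbol is enclosed in square brackets, so tokenization is unambiguous. A minimal stdlib sketch of such a bracket tokenizer (this is an illustration, not SELFormer's actual tokenizer; the benzene string shown is the encoding produced by the `selfies` Python package, cited here as an assumption):

```python
import re

def tokenize_selfies(selfies_string: str) -> list[str]:
    """Split a SELFIES string into its bracketed symbols.

    Every SELFIES symbol is delimited by square brackets, so a chemical
    language model can tokenize it unambiguously -- unlike SMILES, where
    multi-character atoms (e.g. 'Cl', 'Br') and ring-closure digits need
    special handling.
    """
    return re.findall(r"\[[^\]]*\]", selfies_string)

# A SELFIES encoding of benzene (as produced by the `selfies` package):
benzene = "[C][=C][C][=C][C][=C][Ring1][=Branch1]"
print(tokenize_selfies(benzene))
# ['[C]', '[=C]', '[C]', '[=C]', '[C]', '[=C]', '[Ring1]', '[=Branch1]']
```

Because any sequence of valid SELFIES symbols decodes to a valid molecule, a generative model operating over this vocabulary cannot emit a syntactically broken structure, which is the robustness advantage over SMILES described above.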

Atakan Yüksel, Erva Ulusoy, Atabey Ünlü, Tunca Doğan • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Molecular property prediction | MoleculeNet BBBP (scaffold) | ROC-AUC | 90.2 | 117 |
| Molecular property prediction | MoleculeNet SIDER (scaffold) | ROC-AUC | 0.745 | 97 |
| Molecular property prediction | MoleculeNet BACE (scaffold) | ROC-AUC | 83.2 | 87 |
| Molecular property prediction | MoleculeNet HIV (scaffold) | ROC-AUC | 68.1 | 66 |
| Molecular property prediction | MoleculeNet Tox21 (scaffold) | ROC-AUC | 65.3 | 48 |
| Regression | FreeSolv (scaffold) | RMSE | 2.797 | 19 |
| Molecular property prediction | Lipophilicity (scaffold) | RMSE | 0.735 | 12 |
| Molecular property prediction | ESOL (scaffold) | RMSE | 0.682 | 11 |
| Regression-based molecular property prediction | ESOL (random split) | RMSE | 0.386 | 5 |
| Regression-based molecular property prediction | FreeSolv (random split) | RMSE | 1.005 | 5 |

Showing 10 of 13 rows.
