Translation between Molecules and Natural Language
About
We present $\textbf{MolT5}$, a self-supervised learning framework for pretraining models on a vast amount of unlabeled natural language text and molecule strings. $\textbf{MolT5}$ allows for new, useful, and challenging analogs of traditional vision-language tasks, such as molecule captioning and text-based de novo molecule generation (altogether: translation between molecules and language), which we explore for the first time. Since $\textbf{MolT5}$ pretrains models on single-modal data, it helps overcome the problem of data scarcity in the chemistry domain. Furthermore, we consider several metrics, including a new cross-modal embedding-based metric, to evaluate the tasks of molecule captioning and text-based molecule generation. Our results show that $\textbf{MolT5}$-based models are able to generate outputs, both molecules and captions, that are in many cases of high quality.
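Both translation directions can be tried in a few lines with Hugging Face Transformers. The sketch below is illustrative, not from the paper: it assumes the authors' released finetuned checkpoints `laituan245/molt5-large-smiles2caption` and `laituan245/molt5-large-caption2smiles`, and the example inputs and generation settings (beam search, length limit) are arbitrary choices.

```python
# Minimal sketch: querying MolT5 checkpoints for both translation
# directions. Checkpoint names and decoding settings are assumptions,
# not prescribed by the paper.
from transformers import T5ForConditionalGeneration, T5Tokenizer


def translate(model_name: str, text: str, max_length: int = 512) -> str:
    """Load a MolT5 checkpoint and greedily translate one input string."""
    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs, num_beams=5, max_length=max_length)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


# Molecule captioning: SMILES string in, natural-language description out.
print(translate(
    "laituan245/molt5-large-smiles2caption",  # assumed checkpoint name
    "CC(=O)OC1=CC=CC=C1C(=O)O",               # aspirin, as an example input
))

# Text-based de novo molecule generation: description in, SMILES out.
print(translate(
    "laituan245/molt5-large-caption2smiles",  # assumed checkpoint name
    "The molecule is a monocarboxylic acid that is benzoic acid "
    "substituted at position 2 by an acetoxy group.",
))
```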
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Molecule Captioning | ChEBI-20 (test) | BLEU-4 | 0.508 | 107 |
| Text-guided molecule generation | ChEBI-20 (test) | MACCS FTS Similarity | 83.4 | 48 |
| Molecule Description Generation | ChEBI-20 (test) | BLEU-2 | 0.54 | 34 |
| Quantitative Solute-Solvent Interaction | FreeSolv (test) | RMSE | 1.135 | 29 |
| Description-guided molecule design | ChEBI-20 2022 (test) | Exact Match Accuracy | 31.1 | 26 |
| Molecule Description Generation | ChEBI-20 2022 (test) | BLEU-2 | 0.54 | 20 |
| Molecule Captioning | Mol-Instructions | ROUGE-L | 0.594 | 17 |
| Forward reaction prediction | Mol-Instructions (test) | EM Score | 0.897 | 15 |
| Text-based de novo molecule generation | ChEBI-20 | BLEU | 85.4 | 14 |
| Molecule Generation | ChEBI-20 (test) | Exact Match | 31.1 | 14 |