# Learning to Make Generalizable and Diverse Predictions for Retrosynthesis

## About
We propose a new model for making generalizable and diverse retrosynthetic reaction predictions. Given a target compound, the task is to predict the chemical reactants likely to produce it. This generative task can be framed as a sequence-to-sequence problem over the SMILES representations of the molecules. Building on the popular Transformer architecture, we propose two novel pre-training methods that construct relevant auxiliary tasks (plausible reactions) for our problem. We further incorporate a discrete latent variable model into the architecture to encourage the model to produce a diverse set of alternative predictions. On the 50k-reaction subset of the United States patent literature (USPTO-50k) benchmark, our model substantially improves accuracy over the baseline while also generating more diverse predictions.
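To make the sequence-to-sequence framing concrete, the sketch below shows how a product/reactants pair can be turned into token sequences for a Transformer. The regex-based SMILES tokenizer is a common choice for SMILES seq2seq models and is shown here as an illustrative assumption, not the authors' exact preprocessing; the aspirin example pair is likewise hypothetical.

```python
import re

# Regex-based SMILES tokenizer (a widely used pattern for SMILES seq2seq
# models; an illustrative sketch, not necessarily the paper's preprocessing).
SMILES_TOKEN_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p"
    r"|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%\d{2}|\d)"
)

def tokenize_smiles(smiles: str) -> list[str]:
    """Split a SMILES string into tokens suitable for a seq2seq model."""
    tokens = SMILES_TOKEN_PATTERN.findall(smiles)
    # Sanity check: tokenization must be lossless.
    assert "".join(tokens) == smiles, f"tokenization lost characters: {smiles}"
    return tokens

# Retrosynthesis as seq2seq: source = target compound (product),
# target = reactant set, with multiple reactants joined by '.'.
product = "CC(=O)Oc1ccccc1C(=O)O"        # aspirin (hypothetical example)
reactants = "CC(=O)O.Oc1ccccc1C(=O)O"    # acetic acid . salicylic acid
src_tokens = tokenize_smiles(product)
tgt_tokens = tokenize_smiles(reactants)
```

The model is then trained to map `src_tokens` to `tgt_tokens`; the `.` token lets a single output sequence encode several reactant molecules.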
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Retrosynthesis | USPTO-50k, reaction type unknown (test) | Top-1 Accuracy: 40.5 | 59 |
| Retrosynthesis prediction | USPTO-50K | Top-1 Acc (Unknown): 40.5 | 22 |
| Retrosynthesis (reaction class not given) | USPTO-50k (test) | Top-1 Acc: 40.5 | 14 |
| Retrosynthesis prediction | USPTO-50k (40/5/5) | Top-1 Accuracy: 0.448 | 8 |
| Direct synthesis prediction | USPTO-MIT separated | Top-1 Accuracy: 79.6 | 8 |