
LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset

About

Chemistry plays a crucial role in many domains, such as drug discovery and material science. While large language models (LLMs) such as GPT-4 exhibit remarkable capabilities on natural language processing tasks, existing research indicates that their performance on chemistry tasks is discouragingly low. In this paper, however, we demonstrate that our developed LLMs can achieve very strong results on a comprehensive set of chemistry tasks, outperforming the most advanced GPT-4 and Claude 3 Opus by a substantial margin. To accomplish this, we propose SMolInstruct, a large-scale, comprehensive, and high-quality dataset for instruction tuning. It contains 14 selected chemistry tasks and over three million samples, laying a solid foundation for training and evaluating LLMs for chemistry. Using SMolInstruct, we fine-tune a set of open-source LLMs, among which we find that Mistral serves as the best base model for chemistry tasks. Our analysis further demonstrates the critical role of the proposed dataset in driving the performance improvements.
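To make the instruction-tuning setup concrete, here is a minimal sketch of how a single sample (instruction, optional input, output) could be rendered into one training prompt. The field names and template are illustrative assumptions for this sketch, not the exact SMolInstruct schema.

```python
def build_prompt(sample: dict) -> str:
    """Render one instruction-tuning sample as a prompt/response string.

    Assumes a simple {instruction, input, output} record; real datasets
    may use different field names or chat-style templates.
    """
    prompt = f"Instruction: {sample['instruction']}\n"
    if sample.get("input"):
        prompt += f"Input: {sample['input']}\n"
    prompt += f"Response: {sample['output']}"
    return prompt


# Example record (hypothetical): a forward reaction prediction task
# where molecules are given as SMILES strings.
sample = {
    "instruction": "Predict the product of the following reaction.",
    "input": "CCO.CC(=O)Cl",
    "output": "CCOC(=O)C",
}
print(build_prompt(sample))
```

During fine-tuning, the loss would typically be computed only on the response portion, so the model learns to generate answers conditioned on the task instruction.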

Botao Yu, Frazier N. Baker, Ziqi Chen, Xia Ning, Huan Sun • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Molecule Captioning | ChEBI-20 (test) | BLEU-4 | 0.333 | 107 |
| Molecular Property Classification | MoleculeNet BBBP | ROC-AUC | 82.4 | 41 |
| Molecular Property Classification | MoleculeNet BACE | ROC-AUC | 46.7 | 36 |
| Molecular Property Classification | MoleculeNet ClinTox | ROC-AUC | 77.5 | 27 |
| Retrosynthesis | Mol-Instructions | Exact Match | 45.3 | 24 |
| Forward Reaction Prediction | Mol-Instructions | Exact Match | 74.3 | 24 |
| Reagent Prediction | Mol-Instructions | Exact Match | 0.00 | 24 |
| Molecular Property Classification | MoleculeNet SIDER | ROC-AUC | 0.784 | 21 |
| Advanced Property Reasoning | PolyBench (test) | RgL | 0.24 | 19 |
| Polymer Concepts | PolyBench 1.0 (test) | RgL Score | 0.17 | 19 |

(Showing 10 of 37 benchmark rows.)
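Several of the rows above report an Exact Match score for molecule-generation tasks. As a minimal sketch of how such a metric could be computed: the fraction of predictions that equal the reference string. Note that real chemistry evaluations typically canonicalize SMILES first (e.g. with RDKit) so that equivalent notations compare equal; this simplified sketch uses raw string comparison.

```python
def exact_match(preds: list[str], refs: list[str]) -> float:
    """Fraction of predictions identical to their reference strings.

    Simplified sketch: compares raw strings. A faithful SMILES
    evaluation would canonicalize both sides first so that
    chemically equivalent notations (e.g. "OCC" vs "CCO") match.
    """
    assert len(preds) == len(refs), "prediction/reference count mismatch"
    if not refs:
        return 0.0
    hits = sum(p == r for p, r in zip(preds, refs))
    return hits / len(refs)


# Hypothetical predictions vs. references: the second pair is the
# same molecule written differently, so raw comparison misses it.
preds = ["CCOC(=O)C", "CCO", "c1ccccc1"]
refs = ["CCOC(=O)C", "OCC", "c1ccccc1"]
print(exact_match(preds, refs))
```

This illustrates why canonicalization matters: without it, a chemically correct answer written in a different atom order counts as a miss.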
