Chemistry Integrated Language Model using Hierarchical Molecular Representation for Polymer Informatics
About
Machine learning has transformed materials discovery for inorganic compounds and small molecules, yet polymers remain largely inaccessible to these methods. Data scarcity is often cited as the primary bottleneck, but we demonstrate that strategic molecular representations can overcome this limitation. We introduce CI-LLM (Chemically Informed Language Model), a framework that combines HAPPY (Hierarchically Abstracted rePeat unit of PolYmer), which encodes chemical substructures as tokens, with numerical descriptors inside transformer architectures. For property prediction, De$^3$BERTa, our descriptor-enriched encoder, achieves 3.5x faster inference than SMILES-based models with improved accuracy ($R^2$ score gains of 0.9-4.1 percent across four properties), while providing interpretable structure-property insights at the subgroup level. For inverse design, our GPT-based generator produces polymers with targeted properties, achieving 100 percent scaffold retention and successful multi-property optimization for negatively correlated objectives. Together, these components demonstrate both forward prediction and inverse design, showing how strategic molecular representation advances machine learning applications in polymer science.
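The core idea of descriptor-enriched input — encoding chemical substructures as discrete tokens and concatenating each token embedding with the repeat unit's numerical descriptors before the transformer — can be sketched as below. The vocabulary, substructure names, and descriptor values are illustrative placeholders, not the actual HAPPY vocabulary or model internals.

```python
import numpy as np

# Hypothetical vocabulary of hierarchically abstracted substructure tokens
# (illustrative only; not the actual HAPPY vocabulary).
VOCAB = {"[BENZENE]": 0, "[ESTER]": 1, "[ETHYLENE]": 2, "[PAD]": 3}
EMBED_DIM = 8
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(len(VOCAB), EMBED_DIM))

def encode_repeat_unit(substructures, descriptors):
    """Map a polymer repeat unit to a sequence of token embeddings,
    each concatenated with the unit-level numerical descriptor vector."""
    ids = [VOCAB[s] for s in substructures]
    tokens = token_embeddings[ids]                 # (seq_len, EMBED_DIM)
    desc = np.tile(descriptors, (len(ids), 1))     # repeat descriptors per token
    return np.concatenate([tokens, desc], axis=1)  # descriptor-enriched input

# Example: a PET-like repeat unit with two made-up descriptor values.
x = encode_repeat_unit(["[BENZENE]", "[ESTER]", "[ETHYLENE]"],
                       descriptors=np.array([0.42, 0.17]))
print(x.shape)  # (3, 10): 3 tokens, 8-dim embedding + 2 descriptors
```

In a real pipeline this matrix would feed the encoder in place of a plain token-embedding sequence, which is what lets the model see both substructure identity and continuous chemistry-level features.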
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Property Prediction | Density (1,672 samples) | R² Score: 0.83 | 4 |
| Property Prediction | Band gap energy (3,357 samples) | R² Score: 0.913 | 4 |
| Property Prediction | Glass transition temperature (6,983 samples) | R² Score: 0.909 | 4 |
| Property Prediction | Melting temperature (3,604 samples) | R² Score: 0.773 | 4 |
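The benchmark metric above, the coefficient of determination R², can be computed directly; the glass-transition-temperature values below are made up purely to illustrate the calculation.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# Toy example: predicted glass transition temperatures in K (values made up).
y_true = [350.0, 420.0, 390.0, 310.0]
y_pred = [355.0, 410.0, 395.0, 320.0]
print(round(r2_score(y_true, y_pred), 3))  # 0.964
```

An R² of 1.0 means perfect prediction, while 0.0 means the model does no better than predicting the mean of the property values.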