
PLaID++: A Preference Aligned Language Model for Targeted Inorganic Materials Design

About

Reinforcement Learning from Verifiable Rewards (RLVR) has emerged as a promising approach to improving correctness in LLMs; however, in many scientific problems the objective is not to produce a single correct answer but to produce a diverse array of candidates that satisfy a set of constraints. We study this challenge in the context of materials generation. To this end, we introduce PLaID++, an LLM post-trained for stable and property-guided crystal generation. We find that performance hinges on our crystallographic representation and reward formulation. First, we introduce a compact, symmetry-informed Wyckoff text representation which improves computational efficiency and encourages generalization from physical priors. Second, we demonstrate that temperature scaling acts as an entropy regularizer which counteracts mode collapse and encourages exploration. By encoding symmetry constraints directly into text and guiding model outputs towards desirable chemical space, PLaID++ generates structures that are thermodynamically stable, unique, and novel at a $\sim$50\% greater rate than prior methods, and conditionally generates structures with desired space group properties. Our work demonstrates the potential of adapting post-training techniques from natural language processing to materials design, paving the way for targeted and efficient discovery of novel materials.
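The temperature-scaling idea mentioned above can be illustrated with a minimal sampling sketch. This is not the paper's implementation; the function names and the toy logits are ours. It shows the core mechanism: dividing logits by a temperature T > 1 flattens the sampling distribution (raising its entropy and encouraging exploration), while T < 1 sharpens it.

```python
import numpy as np

def sample_with_temperature(logits, T=1.0, rng=None):
    """Sample a token index from temperature-scaled logits.

    Higher T flattens the distribution (more entropy, more exploration);
    lower T sharpens it (more exploitation). Returns (index, probs).
    """
    rng = rng or np.random.default_rng(0)
    scaled = np.asarray(logits, dtype=float) / T
    scaled -= scaled.max()          # shift for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()            # normalize to a distribution
    return int(rng.choice(len(probs), p=probs)), probs

def entropy(p):
    """Shannon entropy in nats of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Toy logits: raising T increases the entropy of the sampling distribution.
logits = [2.0, 1.0, 0.5, 0.0]
_, p_sharp = sample_with_temperature(logits, T=0.5)
_, p_flat = sample_with_temperature(logits, T=2.0)
assert entropy(p_flat) > entropy(p_sharp)
```

In an RLVR loop, sampling rollouts at a higher temperature plays the role of an entropy regularizer: it keeps probability mass spread over more candidate structures, counteracting the mode collapse the abstract describes.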

Andy Xu, Rohan Desai, Larry Wang, Gabriel Hope, Ethan Ritz • 2025

Related benchmarks

Task: Crystal Generation
Dataset: LeMat-GenBench (MP20)
Result: Validity 96
Rank: 28
