SCALAR: Learning and Composing Skills through LLM Guided Symbolic Planning and Deep RL Grounding
About
LM-based agents excel when given high-level action APIs but struggle to ground language into low-level control. Prior work uses LLMs to generate skills or reward functions for RL, but these one-shot approaches lack feedback to correct specification errors. We introduce SCALAR, a bidirectional framework coupling LLM planning with RL through a learned skill library. The LLM proposes skills with preconditions and effects; RL trains a policy for each skill and feeds execution results back to iteratively refine the specifications, improving robustness to initial errors. Pivotal Trajectory Analysis corrects LLM priors by analyzing RL trajectories; Frontier Checkpointing optionally saves environment states at skill boundaries to improve sample efficiency. On Craftax, SCALAR achieves 88.2% diamond collection, a 1.9x improvement over the best baseline, and reaches the Gnomish Mines 9.1% of the time, where prior methods fail entirely.
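The loop described above can be sketched in a few lines. This is a minimal illustration, not the SCALAR implementation: the `SkillSpec` dataclass, the predicate names, and the `refine` rule (dropping LLM-proposed effects that RL rollouts never achieve, in the spirit of Pivotal Trajectory Analysis) are all hypothetical simplifications.

```python
from dataclasses import dataclass

@dataclass
class SkillSpec:
    """Hypothetical symbolic skill specification proposed by the LLM."""
    name: str
    preconditions: set[str]  # predicates assumed true before execution
    effects: set[str]        # predicates the skill is claimed to achieve

def refine(spec: SkillSpec, achieved: set[str]) -> SkillSpec:
    # Simplified feedback step: keep only effects that RL execution
    # actually produced, correcting over-optimistic LLM priors.
    return SkillSpec(spec.name, spec.preconditions, spec.effects & achieved)

# Toy example: the LLM over-promises an effect; rollouts correct it.
spec = SkillSpec("mine_diamond",
                 preconditions={"has_iron_pickaxe"},
                 effects={"has_diamond", "has_gold"})
observed = {"has_diamond"}          # effects seen in RL trajectories
spec = refine(spec, observed)
print(sorted(spec.effects))         # → ['has_diamond']
```

In the full system this refinement would run inside a train-execute-refine loop per skill, with the updated specifications re-entering the symbolic planner.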
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Achievement Success Rate | Craftax-Classic (single 64x64 overworld) | Eat Plant Success Rate: 91.7 | 10 |
| Achievement Success Rate | Craftax (procedurally generated, 9 floors) | Enter Dungeon Success Rate: 85.2 | 10 |