
Localizing and Correcting Errors for LLM-based Planners

About

Large language models (LLMs) have demonstrated strong reasoning capabilities on math and coding, but frequently fail on symbolic classical planning tasks. Our studies, as well as prior work, show that LLM-generated plans routinely violate domain constraints given in their instructions (e.g., walking through walls). To address this failure, we propose iteratively augmenting instructions with Localized In-Context Learning (L-ICL) demonstrations: targeted corrections for specific failing steps. Specifically, L-ICL identifies the first constraint violation in a trace and injects a minimal input-output example giving the correct behavior for the failing step. L-ICL is substantially more effective than explicit instructions, traditional ICL (which adds complete problem-solving trajectories), and other baselines. For example, on an 8x8 gridworld, L-ICL produces valid plans 89% of the time with only 60 training examples, compared to 59% for the best baseline, an improvement of 30 percentage points. L-ICL also shows dramatic improvements in the other domains we study (gridworld navigation, mazes, Sokoban, and BlocksWorld) and across several LLM architectures.
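The L-ICL loop described above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the gridworld, the wall layout, the `stub_llm`, and the fixed teacher correction are all assumptions made for the example. The key idea it demonstrates is localization: instead of appending a full solution trajectory, the prompt is augmented with one minimal input-output demonstration for the first failing step.

```python
# Toy sketch of Localized In-Context Learning (L-ICL).
# All names and the 8x8 gridworld setup are illustrative assumptions.

GRID = 8                 # 8x8 gridworld, cells (x, y) with 0 <= x, y < 8
WALLS = {(3, 3)}         # hypothetical wall cell the planner must avoid
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def first_violation(start, plan):
    """Return (step_index, state_before_step) of the first constraint
    violation in the trace, or None if the plan is valid."""
    x, y = start
    for i, action in enumerate(plan):
        dx, dy = MOVES[action]
        nx, ny = x + dx, y + dy
        # Domain constraints: stay on the grid, never enter a wall cell.
        if not (0 <= nx < GRID and 0 <= ny < GRID) or (nx, ny) in WALLS:
            return i, (x, y)
        x, y = nx, ny
    return None

def localized_demo(state, bad_action, good_action):
    """Build a minimal input-output demonstration for the failing step."""
    return (f"At {state}, action '{bad_action}' violates a constraint; "
            f"the correct action is '{good_action}'.")

def l_icl(llm, start, prompt, max_rounds=3):
    """Iteratively augment the prompt with localized demos until the
    generated plan passes validation (or the round budget is spent)."""
    for _ in range(max_rounds):
        plan = llm(prompt)
        violation = first_violation(start, plan)
        if violation is None:
            return plan  # valid plan found
        step, state = violation
        # Hypothetical teacher correction for the failing step (e.g. from
        # a classical planner); hard-coded to 'down' in this toy example.
        prompt += "\n" + localized_demo(state, plan[step], "down")
    return None

def stub_llm(prompt):
    """Stand-in for an LLM: walks into the wall until the prompt contains
    the localized demonstration, then follows the corrected detour."""
    if "correct action is 'down'" in prompt:
        return ["right", "right", "down", "right", "right"]
    return ["right", "right", "right", "right"]

plan = l_icl(stub_llm, (0, 3), "Plan a path across the grid.")
```

On the first round the stub plan hits the wall at (3, 3), `first_violation` localizes the failure to step 2 at state (2, 3), and the corrected second-round plan validates. In the paper's setting the demonstration would come from training examples rather than a hard-coded correction.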

Aditya Kumar, William W. Cohen• 2026

Related benchmarks

Task      Dataset                         Metric          Result  Rank
Planning  BlocksWorld                     Success Rate    66      20
Planning  8x8 Grid                        Validity Rate   89      12
Planning  10x10 Maze                      Validity Rate   57      12
Planning  Sokoban Grid                    Validity Rate   63      12
Planning  Full Sokoban                    Validity Rate   46      12
Planning  8x8 two-room gridworld (test)   Validity (%)    0.89    8
