
AdaPlanner: Adaptive Planning from Feedback with Language Models

About

Large language models (LLMs) have recently demonstrated potential as autonomous agents for sequential decision-making tasks. However, most existing methods either take actions greedily without planning or rely on static plans that cannot adapt to environmental feedback. Consequently, the sequential decision-making performance of LLM agents degrades as problem complexity and plan horizon increase. We propose AdaPlanner, a closed-loop approach that allows an LLM agent to adaptively refine its self-generated plan in response to environmental feedback, using both in-plan and out-of-plan refinement strategies. To mitigate hallucination, we develop a code-style LLM prompt structure that facilitates plan generation across a variety of tasks, environments, and agent capabilities. Furthermore, we propose a skill discovery mechanism that leverages successful plans as few-shot exemplars, enabling the agent to plan and refine with fewer task demonstrations. Experiments in the ALFWorld and MiniWoB++ environments show that AdaPlanner outperforms state-of-the-art baselines by 3.73% and 4.11% while using 2x and 600x fewer samples, respectively.
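The closed-loop idea in the abstract can be sketched as a simple execute-observe-refine cycle. This is a toy illustration only, not the paper's implementation: the real AdaPlanner queries an LLM for plan generation and refinement, whereas here `make_plan`, `ToyEnv`, and the repair rule are hypothetical stubs invented for the sketch.

```python
# Toy sketch of a closed-loop plan-refinement cycle in the spirit of
# AdaPlanner. All names here (make_plan, ToyEnv, run_with_refinement)
# are hypothetical; in the actual system an LLM generates and revises
# the plan, and refinement can be in-plan (patch the current plan) or
# out-of-plan (regenerate from the failure point).

def make_plan(goal):
    """Stub planner: returns a (deliberately flawed) ordered action list."""
    return ["open drawer", "unlock door", "take key"]

class ToyEnv:
    """Toy environment: 'unlock door' only succeeds after 'take key'."""
    def __init__(self):
        self.has_key = False

    def step(self, action):
        if action == "take key":
            self.has_key = True
            return "ok"
        if action == "unlock door":
            return "ok" if self.has_key else "error: no key"
        return "ok"

def run_with_refinement(goal, max_revisions=3):
    """Execute the plan; on failure feedback, patch the plan and retry."""
    plan = make_plan(goal)
    for _ in range(max_revisions):
        env = ToyEnv()
        for i, action in enumerate(plan):
            feedback = env.step(action)
            if feedback.startswith("error"):
                # Stub repair rule standing in for LLM-driven refinement:
                # insert the missing prerequisite before the failed step.
                plan = plan[:i] + ["take key"] + plan[i:]
                break
        else:
            return plan  # every action succeeded
    return None
```

Under these stub dynamics, the first rollout fails at "unlock door", the repair inserts "take key" before it, and the second rollout completes, so the returned plan orders "take key" ahead of "unlock door".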

Haotian Sun, Yuchen Zhuang, Lingkai Kong, Bo Dai, Chao Zhang • 2023

Related benchmarks

Task                                 | Dataset                               | Result                      | Rank
Interactive environment task success | ALFWorld (test)                       | Overall Success Rate: 91.79 | 20
Web-based task completion            | MiniWoB++ (with feedback, 9 tasks)    | Success Rate: 91.11         | 5
Web-based task completion            | MiniWoB++ (no feedback, 44 tasks)     | Success Rate: 93.22         | 5
Web-based task completion            | MiniWoB++ (all 53 tasks)              | Success Rate: 92.87         | 5

Other info

Code
