
Novelty Adaptation Through Hybrid Large Language Model (LLM)-Symbolic Planning and LLM-guided Reinforcement Learning

About

In dynamic open-world environments, autonomous agents often encounter novelties that hinder their ability to find plans to achieve their goals. Specifically, traditional symbolic planners fail to generate plans when the robot's planning domain lacks the operators that enable it to interact appropriately with novel objects in the environment. We propose a neuro-symbolic architecture that integrates symbolic planning, reinforcement learning, and a large language model (LLM) to learn how to handle novel objects. In particular, we leverage the common-sense reasoning capability of the LLM to identify missing operators, generate plans with the symbolic AI planner, and write reward functions that guide the reinforcement learning agent in learning control policies for the newly identified operators. Our method outperforms state-of-the-art methods in both operator discovery and operator learning in continuous robotic domains.
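The loop the abstract describes — plan, detect a planning failure, ask the LLM for the missing operator and a reward function, learn a policy with RL, then replan — can be sketched as below. This is a minimal toy illustration only: every name (`symbolic_planner`, `llm_propose_operator`, `rl_train`, the dict-based domain, and the stubbed LLM/RL calls) is an assumption for this sketch, not the paper's actual architecture or API.

```python
"""Toy sketch of a hybrid LLM-symbolic-RL novelty-handling loop.

All functions and data structures here are illustrative stand-ins:
the "planner" is a lookup, and the "LLM" and "RL" calls are stubs.
"""

def symbolic_planner(domain, goal):
    # Toy "planner": a plan exists iff the goal names a known operator.
    return [goal] if goal in domain["operators"] else None

def llm_propose_operator(goal):
    # Stand-in for an LLM call that names the missing operator
    # from a description of the failed goal and novel objects.
    return goal

def llm_write_reward_function(operator):
    # Stand-in for an LLM-generated reward function for the new operator.
    return lambda state: 1.0 if state == operator + "-done" else 0.0

def rl_train(operator, reward_fn):
    # Stand-in for RL training of a control policy under that reward.
    return {"operator": operator, "final_reward": reward_fn(operator + "-done")}

def handle_novelty(domain, goal):
    plan = symbolic_planner(domain, goal)
    if plan is not None:
        return plan                            # no novelty: plan found
    op = llm_propose_operator(goal)            # 1. identify missing operator
    reward_fn = llm_write_reward_function(op)  # 2. LLM writes the reward
    policy = rl_train(op, reward_fn)           # 3. RL learns a policy
    domain["operators"][op] = policy           # 4. extend the planning domain
    return symbolic_planner(domain, goal)      # 5. replan with the new operator

domain = {"operators": {"open-drawer": None}}
print(handle_novelty(domain, "pick-up-lid-from-pot"))  # -> ['pick-up-lid-from-pot']
```

The key design point the sketch mirrors is that the symbolic domain is repaired, not replaced: once the RL agent has learned a policy for the LLM-proposed operator, that operator becomes an ordinary part of the planning domain and future plans can use it directly.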

Hong Lu, Pierrick Lorang, Timothy R. Duggan, Jivko Sinapov, Matthias Scheutz • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Robotic Manipulation | Simulation pick-up-lid-from-pot | Success Rate P-Value | 0.001 | 3 |
| Robotic Manipulation | Simulation pick-up-from-open-box | Success Rate P-Value | 0.001 | 3 |
| Robotic Manipulation | Simulation open-drawer | Success Rate P-Value | 0.001 | 3 |
| Robotic Manipulation | Simulation pick-up-from-drawer | Success Rate P-Value | 0.001 | 3 |
| Robotic Manipulation | Simulation pick-up-nut-from-peg | Success Rate | 1.07 | 3 |
| Operator Discovery | Kitchen Easy (test) | Success Rate | 10 | 2 |
| Operator Discovery | Nut Assembly Medium (test) | Success Count | 10 | 2 |
| Operator Discovery | Coffee box Medium (test) | Success Count | 10 | 2 |
| Operator Discovery | Coffee drawer Hard (test) | Success Rate | 70 | 2 |
