Neuro-Symbolic Skill Discovery for Conditional Multi-Level Planning
About
This paper proposes a novel learning architecture that acquires generalizable high-level symbolic skills from a few unlabeled low-level skill trajectory demonstrations. The architecture combines neural networks for symbol discovery and low-level controller acquisition with a multi-level planning pipeline that uses the discovered symbols and the learned low-level controllers. The discovered action symbols are automatically interpreted by vision-language models, which are also responsible for generating high-level plans. While extracting high-level symbols, the model preserves low-level information so that low-level action planning can be carried out with gradient-based planning. To assess the efficacy of the method, we evaluated the high- and low-level planning performance of the architecture in simulated and real-world experiments across various tasks. The experiments show that the method can manipulate objects in unseen locations and can plan and execute long-horizon tasks with novel action sequences, even in highly cluttered environments, when cued by only a few demonstrations that cover small regions of the environment.
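To make the gradient-based low-level planning idea concrete, here is a minimal sketch. The paper's learned dynamics model and controllers are not given here, so this assumes a hypothetical known linear model `x_{t+1} = A x_t + B u_t` and a quadratic goal cost; the optimization pattern (roll out, backpropagate the goal cost through the dynamics, descend on the action sequence) is the illustrated technique, not the authors' exact implementation.

```python
import numpy as np

def plan_actions(x0, goal, A, B, horizon=10, lr=5.0, steps=500):
    """Gradient-based planning sketch: optimize an action sequence so
    the final rolled-out state reaches `goal` (illustrative only)."""
    U = np.zeros((horizon, B.shape[1]))       # initial action sequence
    for _ in range(steps):
        # forward rollout of the dynamics under the current actions
        xs = [x0]
        for t in range(horizon):
            xs.append(A @ xs[-1] + B @ U[t])
        # backward pass for the cost L = 0.5 * ||x_T - goal||^2
        grad_x = xs[-1] - goal
        grads = np.zeros_like(U)
        for t in reversed(range(horizon)):
            grads[t] = B.T @ grad_x           # dL/du_t
            grad_x = A.T @ grad_x             # propagate to earlier steps
        U -= lr * grads                       # gradient descent on actions
    # final rollout with the optimized actions
    x = x0
    for t in range(horizon):
        x = A @ x + B @ U[t]
    return U, x

# toy double-integrator-style system: reach position 1 with zero velocity
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
U, x_final = plan_actions(np.zeros(2), np.array([1.0, 0.0]), A, B)
```

In the paper's setting the analytic linear rollout would be replaced by differentiating through the learned neural dynamics with an autodiff framework; the descent loop over the action sequence is otherwise the same shape.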
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| High-level Planning | Simulated Tasks, short (≤2 actions) | Success Rate | 97.19 | 4 |
| High-level Planning | Simulated Tasks, medium (3–7 actions) | Success Rate | 97.96 | 4 |
| High-level Planning | Simulated Tasks, long (>7 actions) | Success Rate | 65.18 | 4 |
| High-level Planning | Simulated Tasks, all tasks | Success Rate | 86.1 | 4 |