
Zero-Shot Instruction Following in RL via Structured LTL Representations

About

We study instruction following in multi-task reinforcement learning, where an agent must execute, zero-shot, novel tasks not seen during training. In this setting, linear temporal logic (LTL) has recently been adopted as a powerful framework for specifying structured, temporally extended tasks. While existing approaches successfully train generalist policies, they often struggle to effectively capture the rich logical and temporal structure inherent in LTL specifications. In this work, we address these concerns with a novel approach to learning structured task representations that facilitate training and generalisation. Our method conditions the policy on sequences of Boolean formulae constructed from a finite automaton of the task. We propose a hierarchical neural architecture to encode the logical structure of these formulae, and introduce an attention mechanism that enables the policy to reason about future subgoals. Experiments in a variety of complex environments demonstrate the strong generalisation capabilities and superior performance of our approach.
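To illustrate the core idea of deriving subgoal sequences from a task automaton, here is a minimal, self-contained sketch. It is not the paper's implementation: the automaton below is a hand-coded toy for the LTL task F(a & F b) ("eventually a, then eventually b"), and guard formulae are simplified to sets of required propositions. The sketch extracts the sequence of Boolean guards along the shortest accepting path, the kind of structured representation a policy could be conditioned on.

```python
from collections import deque

# Toy finite automaton for the LTL task F(a & F b): reach `a`, then `b`.
# States: 0 = initial, 1 = `a` seen, 2 = accepting. Each edge carries a
# Boolean guard formula over propositions, simplified here to a frozenset
# of propositions that must hold for the transition to fire.
AUTOMATON = {
    0: {frozenset({"a"}): 1},   # progress once `a` holds
    1: {frozenset({"b"}): 2},   # then progress once `b` holds
}
ACCEPTING = {2}

def subgoal_sequence(start=0):
    """Breadth-first search over automaton states: return the list of
    guard formulae along a shortest path from `start` to acceptance."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, guards = queue.popleft()
        if state in ACCEPTING:
            return guards
        for guard, nxt in AUTOMATON.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, guards + [guard]))
    return None  # no accepting path exists

print(subgoal_sequence())
```

Running this prints the two-step subgoal sequence `[frozenset({'a'}), frozenset({'b'})]`; in the paper's setting, each such formula would be embedded by the hierarchical encoder, and the attention mechanism would let the policy weigh the current subgoal against future ones.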

Mathias Jackermeier, Mattia Giuri, Jacques Cloete, Alessandro Abate • 2026

Related benchmarks

Task | Dataset | Result | Rank
Multi-Task Reinforcement Learning (LTL Instruction Following) | Warehouse Finite Horizon | Success Rate: 99 | 30
Multi-Task Reinforcement Learning (LTL Instruction Following) | Warehouse Infinite Horizon | Average Visits: 880.6 | 20
Multi-Task Reinforcement Learning (LTL Instruction Following) | ZoneEnv Finite Horizon | Success Rate: 99 | 18
Multi-Task Reinforcement Learning (LTL Instruction Following) | ZoneEnv Infinite Horizon | Average Visits: 633.3 | 12
