
Dream to Recall: Imagination-Guided Experience Retrieval for Memory-Persistent Vision-and-Language Navigation

About

Vision-and-Language Navigation (VLN) requires agents to follow natural language instructions through environments, with memory-persistent variants demanding progressive improvement through accumulated experience. Existing approaches for memory-persistent VLN face critical limitations: they lack effective memory access mechanisms, instead relying on entire memory incorporation or fixed-horizon lookup, and predominantly store only environmental observations while neglecting navigation behavioral patterns that encode valuable decision-making strategies. We present Memoir, which employs imagination as a retrieval mechanism grounded by explicit memory: a world model imagines future navigation states as queries to selectively retrieve relevant environmental observations and behavioral histories. The approach comprises: 1) a language-conditioned world model that imagines future states serving dual purposes: encoding experiences for storage and generating retrieval queries; 2) Hybrid Viewpoint-Level Memory that anchors both observations and behavioral patterns to viewpoints, enabling hybrid retrieval; and 3) an experience-augmented navigation model that integrates retrieved knowledge through specialized encoders. Extensive evaluation across diverse memory-persistent VLN benchmarks with 10 distinct testing scenarios demonstrates Memoir's effectiveness: significant improvements across all scenarios, with 5.4% SPL gains on IR2R over the best memory-persistent baseline, accompanied by 8.3x training speedup and 74% inference memory reduction. The results validate that predictive retrieval of both environmental and behavioral memories enables more effective navigation, with analysis indicating substantial headroom (73.3% vs 93.4% upper bound) for this imagination-guided paradigm.
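The core retrieval idea above — using an imagined future state as the query against a viewpoint-anchored memory of observations and behaviors — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all names (`ViewpointMemory`, `retrieve`, the embeddings) are hypothetical, and the paper's world model and encoders are learned networks rather than hand-set vectors.

```python
# Minimal sketch of imagination-guided retrieval over a hybrid
# viewpoint-level memory (hypothetical names; embeddings are stand-ins
# for the outputs of learned encoders).
import numpy as np

def cosine(a, b):
    # Cosine similarity with a small epsilon for numerical safety.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

class ViewpointMemory:
    """Hybrid memory: each viewpoint anchors both an observation
    embedding and a behavioral-pattern embedding."""
    def __init__(self):
        self.entries = []  # list of (viewpoint_id, obs_emb, behavior_emb)

    def store(self, viewpoint_id, obs_emb, behavior_emb):
        self.entries.append(
            (viewpoint_id, np.asarray(obs_emb), np.asarray(behavior_emb))
        )

    def retrieve(self, imagined_state, k=2):
        """Rank stored viewpoints by similarity between the imagined
        future state (the query) and their observation embeddings;
        return both observation and behavior memories for the top-k."""
        q = np.asarray(imagined_state)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return ranked[:k]

# Usage: in the paper, the world model would produce `imagined` from the
# instruction and current navigation state; here it is a stand-in vector.
memory = ViewpointMemory()
memory.store("vp_kitchen", [1.0, 0.0, 0.0], [0.2, 0.8])
memory.store("vp_hall",    [0.0, 1.0, 0.0], [0.9, 0.1])
imagined = [0.9, 0.1, 0.0]               # imagined future-state embedding
top = memory.retrieve(imagined, k=1)
print(top[0][0])                          # → vp_kitchen
```

The point of the sketch is the data flow: imagination produces the query, and retrieval returns both environmental observations and behavioral histories anchored to the matched viewpoints, which the navigation model then consumes.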

Yunzhe Xu, Yiyuan Pan, Zhe Liu • 2025

Related benchmarks

Task                            Dataset                                            Metric  Result  Rank
Vision-and-Language Navigation  GSA-R2R N-Scene (test)                             SR      50.2    26
Vision-and-Language Navigation  IR2R (val-unseen)                                  TL      10.2    21
Vision-and-Language Navigation  IR2R (val-seen)                                    TL      11.2    21
Vision-and-Language Navigation  GSA-R2R User Instructions Residential v1 (test)    SR      66.1    12
Vision-and-Language Navigation  GSA-R2R Basic Instructions Residential v1 (test)   SR      69.8    12
Vision-and-Language Navigation  GSA-R2R Basic Instructions Non-Residential v1 (test)  SR   57.7    12

(SR = Success Rate; TL = Trajectory Length)
