
Learning Hierarchical Procedural Memory for LLM Agents through Bayesian Selection and Contrastive Refinement

About

We present MACLA, a framework that decouples reasoning from learning by maintaining a frozen large language model while performing all adaptation in an external hierarchical procedural memory. MACLA extracts reusable procedures from trajectories, tracks reliability via Bayesian posteriors, selects actions through expected-utility scoring, and refines procedures by contrasting successes and failures. Across four benchmarks (ALFWorld, WebShop, TravelPlanner, InterCodeSQL), MACLA achieves 78.1 percent average performance, outperforming all baselines. On ALFWorld unseen tasks, MACLA reaches 90.3 percent with 3.1 percent positive generalization. The system constructs memory in 56 seconds, 2800 times faster than the state-of-the-art LLM parameter-training baseline, compressing 2851 trajectories into 187 procedures. Experimental results demonstrate that structured external memory with Bayesian selection and contrastive refinement enables sample-efficient, interpretable, and continually improving agents without LLM parameter updates.
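To make the selection mechanism concrete, here is a minimal, hypothetical sketch of the Bayesian reliability tracking and expected-utility scoring the abstract describes: each stored procedure keeps a Beta posterior over its success rate, updated from trajectory outcomes, and candidates are ranked by relevance weighted by posterior reliability. All names, the prior, and the utility form are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: procedure names, the Beta(1, 1) prior, and the
# relevance-times-reliability utility are assumptions, not MACLA's code.
from dataclasses import dataclass


@dataclass
class Procedure:
    name: str
    successes: int = 1  # Beta(1, 1) uniform prior
    failures: int = 1

    @property
    def reliability(self) -> float:
        # Posterior mean of Beta(successes, failures)
        return self.successes / (self.successes + self.failures)

    def update(self, succeeded: bool) -> None:
        # Bayesian update: increment the matching pseudo-count
        if succeeded:
            self.successes += 1
        else:
            self.failures += 1


def select(procedures, relevance):
    # Expected utility = task relevance weighted by posterior reliability
    return max(procedures, key=lambda p: relevance[p.name] * p.reliability)


procs = [Procedure("open_fridge"), Procedure("search_shelf")]
procs[0].update(True)
procs[0].update(True)   # two observed successes -> reliability 3/4
procs[1].update(False)  # one observed failure   -> reliability 1/3
rel = {"open_fridge": 0.9, "search_shelf": 0.8}
best = select(procs, rel)  # open_fridge wins: 0.9 * 0.75 > 0.8 * 0.333
```

Under this framing, contrastive refinement would then edit the procedure text itself by comparing trajectories where it succeeded against those where it failed, while the posterior counts keep a running record of how trustworthy each procedure is.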

Saman Forouzandeh, Wei Peng, Parham Moradi, Xinghuo Yu, Mahdi Jalili • 2025

Related benchmarks

Task                        Dataset           Metric                   Result   Rank
Web navigation              WebShop           Average Score            70.2     13
Embodied agent              ALFWorld Seen     Average Reward           87.2     12
Embodied agent              ALFWorld Unseen   Average Reward           90.3     12
SQL code generation agent   InterCodeSQL      Average Reward           59.3     10
Travel planning agent       TravelPlanner     Commonsense Score (CS)   0.833    4
