
Structured Agent Distillation for Large Language Model

About

Large language models (LLMs) exhibit strong capabilities as decision-making agents by interleaving reasoning and actions, as seen in ReAct-style frameworks. Yet, their practical deployment is constrained by high inference costs and large model sizes. We propose Structured Agent Distillation, a framework that compresses large LLM-based agents into smaller student models while preserving both reasoning fidelity and action consistency. Unlike standard token-level distillation, our method segments trajectories into [REASON] and [ACT] spans, applying segment-specific losses to align each component with the teacher's behavior. This structure-aware supervision enables compact agents to better replicate the teacher's decision process. Experiments on ALFWorld, HotPotQA-ReAct, and WebShop show that our approach consistently outperforms token-level and imitation learning baselines, achieving significant compression with minimal performance drop. Scaling and ablation results further highlight the importance of span-level alignment for efficient and deployable agents.
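The core idea of segment-specific supervision can be sketched in a few lines. The snippet below is an illustrative sketch, not the authors' implementation: it assumes a trajectory whose tokens are already tagged as [REASON] or [ACT], and the helper names (`segment_spans`, `span_loss`) and the per-span weights are hypothetical.

```python
# Sketch of span-masked loss aggregation for structure-aware distillation.
# Assumes per-token teacher-student losses are already computed; the
# segment-specific losses are combined with separate weights per span type.

def segment_spans(tags):
    """Build boolean masks selecting reasoning vs. action tokens."""
    reason_mask = [t == "REASON" for t in tags]
    act_mask = [t == "ACT" for t in tags]
    return reason_mask, act_mask

def span_loss(token_losses, tags, w_reason=1.0, w_act=1.0):
    """Combine per-token losses with segment-specific weights."""
    reason_mask, act_mask = segment_spans(tags)

    def masked_mean(losses, mask):
        selected = [l for l, m in zip(losses, mask) if m]
        return sum(selected) / len(selected) if selected else 0.0

    return (w_reason * masked_mean(token_losses, reason_mask)
            + w_act * masked_mean(token_losses, act_mask))

# Toy trajectory: three reasoning tokens, then two action tokens.
tags = ["REASON", "REASON", "REASON", "ACT", "ACT"]
token_losses = [0.2, 0.4, 0.6, 1.0, 2.0]
print(round(span_loss(token_losses, tags), 4))  # reason mean 0.4 + act mean 1.5 = 1.9
```

Weighting the two spans separately is what lets the student trade off reasoning fidelity against action consistency, in contrast to a flat token-level loss that treats every position identically.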

Jun Liu, Zhenglun Kong, Peiyan Dong, Changdi Yang, Tianqi Li, Hao Tang, Geng Yuan, Wei Niu, Wenbin Zhang, Pu Zhao, Xue Lin, Dong Huang, Yanzhi Wang · 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Multi-hop Question Answering | HotpotQA | CoT Match Rate | 86.5 | 54
Web-based Agent Interaction | WebShop | CoT Match Rate | 74.6 | 41
Interactive Decision-making | WebShop | Success Rate | 64.1 | 36
Question Answering | HotpotQA | Success Rate | 75.2 | 33
Sequential Decision Making | ALFWorld (test) | Success Rate | 68 | 26
Decision Making | ALFWorld | Steps | 6.4 | 22
Web-based Reasoning | WebShop | Average Reasoning Length (tokens) | 34.9 | 18
Embodied AI Reasoning | ALFWorld | CoT Match Rate | 77.2 | 18
Sequential Decision Making | HotpotQA | Average Steps per Episode | 4.8 | 18
Interactive Reasoning | ALFWorld | Average Reasoning Length (tokens) | 41.2 | 18
