
Streamlining the Collaborative Chain of Models into A Single Forward Pass in Generation-Based Tasks

About

In Retrieval-Augmented Generation (RAG) and agent-based frameworks, the "Chain of Models" approach is widely used, where multiple specialized models work sequentially on distinct sub-tasks. This approach is effective but increases resource demands as each model must be deployed separately. Recent advancements attempt to address this by applying prompt tuning, which allows a shared base model to adapt to multiple tasks with minimal parameter changes. However, a key challenge remains: intermediate outputs, passed between models as plain text, require recomputation of hidden states (i.e., Key and Value (KV) states in Transformers) during inference. In this paper, we introduce FTHSS, a novel prompt-tuning method that enables models to share KV hidden states, eliminating redundant forward passes and reducing KV cache storage. By modifying input and attention masks during training, FTHSS allows models to effectively utilize KV hidden states from prior models in both single- and multi-round scenarios. Empirical results on four tasks show that FTHSS matches the performance of traditional model chains while improving inference efficiency.
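The core mechanism can be illustrated with a toy example. The sketch below is not the paper's implementation; it is a minimal single-head attention demo (all shapes, weights, and names are illustrative) showing why sharing a KV cache between chained models saves work: the downstream model's query attends directly over the upstream model's cached Key/Value states, so those tokens never need a second forward pass.

```python
import numpy as np

# Toy single-head attention illustrating the idea behind KV-state
# sharing: the second model in a chain attends over the KV states
# cached by the first model instead of re-encoding the first model's
# decoded text output. Dimensions and weights are illustrative only.

rng = np.random.default_rng(0)
d = 8  # head dimension

def attend(q, K, V):
    """Scaled dot-product attention for a single query vector."""
    scores = K @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

# --- Stage 1: "model A" runs its forward pass, leaving a KV cache ---
h_a = rng.normal(size=(5, d))          # hidden states for 5 of A's tokens
Wk, Wv = rng.normal(size=(d, d)), rng.normal(size=(d, d))
K_cache, V_cache = h_a @ Wk, h_a @ Wv  # KV cache produced by A

# --- Stage 2: "model B" reuses A's cache for its own new tokens ---
h_b = rng.normal(size=(3, d))          # B's own tokens
K_b, V_b = h_b @ Wk, h_b @ Wv
q = rng.normal(size=d)                 # query from B's current position

out_shared = attend(q,
                    np.vstack([K_cache, K_b]),
                    np.vstack([V_cache, V_b]))

# Recomputing A's KV states from its hidden states yields the same
# attention output, so attending over the shared cache is exact while
# skipping the redundant forward pass over A's tokens.
K_re, V_re = h_a @ Wk, h_a @ Wv
out_recomp = attend(q,
                    np.vstack([K_re, K_b]),
                    np.vstack([V_re, V_b]))
assert np.allclose(out_shared, out_recomp)
```

In the real setting the two "models" are prompt-tuned variants of one shared base model, and FTHSS's training-time input and attention-mask changes are what make the downstream model's attention over the upstream cache well-behaved; this sketch only shows the inference-time saving.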

Yuanjie Lyu, Chao Zhang, Yuhao Chen, Yong Chen, Tong Xu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Query Rewriting & QA | 2Wiki (BM25) | F1 | 31.9 | 12 |
| Context Compression & QA | NQ (val) | EM | 35.8 | 6 |
| Query Rewriting & QA | HQA (BM25) | EM | 27.4 | 6 |
| Active RAG | PubHealth | Accuracy | 72 | 6 |
| Context Compression & QA | HQA (val) | EM | 29 | 6 |
| Context Compression & QA | TQA (val) | EM | 59.3 | 6 |
| Memory & Reasoning | StrategyQA (multi-round) | Accuracy | 69.2 | 6 |
| Memory & Reasoning | ComQA (multi-round) | Accuracy | 70.3 | 6 |
| Memory & Reasoning | TruthQA (multi-round) | Accuracy | 68.9 | 6 |
