
Communicating Activations Between Language Model Agents

About

Communication between multiple language model (LM) agents has been shown to scale up the reasoning ability of LMs. While natural language has been the dominant medium for inter-LM communication, it is not obvious this should be the standard: not only does natural language communication incur high inference costs that scale quickly with the number of both agents and messages, but also the decoding process abstracts away too much rich information that could otherwise be accessed from the internal activations. In this work, we propose a simple technique whereby LMs communicate via activations; concretely, we pause an LM $\textit{B}$'s computation at an intermediate layer, combine its current activation with another LM $\textit{A}$'s intermediate activation via some function $\textit{f}$, then pass $\textit{f}$'s output into the next layer of $\textit{B}$ and continue the forward pass until decoding is complete. This approach scales up LMs on new tasks with zero additional parameters and data, and saves a substantial amount of compute over natural language communication. We test our method with various functional forms $\textit{f}$ on two experimental setups (multi-player coordination games and reasoning benchmarks) and find that it achieves up to a $27.0\%$ improvement over natural language communication across datasets with less than $1/4$ of the compute, illustrating the superiority and robustness of activations as an alternative "language" for communication between LMs.
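
The mechanics described above are easy to prototype. Below is a minimal PyTorch sketch of the idea, not the authors' implementation: it assumes two toy transformer stacks with a shared hidden size and equal sequence lengths, and takes $\textit{f}$ to be an elementwise mean (one simple choice; the paper tests several functional forms). The names make_lm, run_layers, and the pause layer K are illustrative.

```python
import torch
import torch.nn as nn

HIDDEN, N_LAYERS, K = 64, 6, 3   # toy sizes; K is the layer where B pauses

def make_lm() -> nn.ModuleList:
    # Stand-in for an LM's transformer stack (toy scale, untrained weights).
    return nn.ModuleList([
        nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=4, batch_first=True)
        for _ in range(N_LAYERS)
    ])

def run_layers(layers, h):
    # Run a hidden state through a sequence of transformer layers.
    for layer in layers:
        h = layer(h)
    return h

def f(h_b, h_a):
    # Combination function f: elementwise mean of the two activations.
    # Adds zero parameters, consistent with the abstract's claim.
    return 0.5 * (h_b + h_a)

lm_a, lm_b = make_lm(), make_lm()
x_a = torch.randn(1, 12, HIDDEN)  # A's input embeddings; equal sequence
x_b = torch.randn(1, 12, HIDDEN)  # lengths assumed so f can act elementwise

with torch.no_grad():
    h_a = run_layers(lm_a[:K], x_a)   # run A through its first K layers
    h_b = run_layers(lm_b[:K], x_b)   # pause B's forward pass at layer K
    h_b = f(h_b, h_a)                 # inject A's activation into B
    out = run_layers(lm_b[K:], h_b)   # resume B through its remaining layers

print(out.shape)  # torch.Size([1, 12, 64])
```

In practice $\textit{f}$ can be any function mapping the pair of activations into $\textit{B}$'s hidden space; a parameter-free form like the mean shown here requires no additional training data, which is what lets the method transfer to new tasks at zero extra cost.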

Vignav Ramesh, Kenneth Li • 2025

Related benchmarks

Task               | Dataset                 | Metric   | Rank
Question Answering | HotpotQA-E 2018 (Full)  | F1 Score | 24 / 30
Question Answering | QASPER-E 2021 (Full)    | F1 Score | 6 / 30
Question Answering | MuSiQue-E 2022 (Full)   | F1 Score | 7 / 30
Summarization      | SAMSum Full 2019        | F1 Score | 26 / 30
Reasoning          | Countries               | F1 Score | 3 / 19
