
SCM: Enhancing Large Language Model with Self-Controlled Memory Framework

About

Large Language Models (LLMs) are constrained by their inability to process lengthy inputs, resulting in the loss of critical historical information. To address this limitation, in this paper, we propose the Self-Controlled Memory (SCM) framework to enhance the ability of LLMs to maintain long-term memory and recall relevant information. Our SCM framework comprises three key components: an LLM-based agent serving as the backbone of the framework, a memory stream storing agent memories, and a memory controller updating memories and determining when and how to utilize memories from the memory stream. Additionally, the proposed SCM can process ultra-long texts without any modification or fine-tuning and can be integrated with any instruction-following LLM in a plug-and-play paradigm. Furthermore, we annotate a dataset to evaluate the effectiveness of SCM for handling lengthy inputs. The annotated dataset covers three tasks: long-term dialogues, book summarization, and meeting summarization. Experimental results demonstrate that our method achieves better retrieval recall and generates more informative responses than competitive baselines in long-term dialogues. (https://github.com/wbbeyourself/SCM4LLMs)
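The three components described above (backbone agent, memory stream, memory controller) can be sketched as a minimal retrieval loop. This is an illustrative stand-in, not the paper's implementation: the class names, the word-overlap scoring, and the `top_k` cutoff are all assumptions, and `llm` is any callable mapping a prompt string to a response string.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStream:
    # Stores each dialogue turn as a {user, agent} memory item.
    items: list = field(default_factory=list)

    def add(self, user_input: str, response: str) -> None:
        self.items.append({"user": user_input, "agent": response})

class MemoryController:
    """Retrieves memories relevant to the current turn.

    Scoring here is naive word overlap; the framework leaves the
    retrieval model open, so treat this as a hypothetical stand-in
    for an embedding-based scorer."""

    def __init__(self, stream: MemoryStream, top_k: int = 2):
        self.stream = stream
        self.top_k = top_k

    def retrieve(self, query: str) -> list:
        query_words = set(query.lower().split())
        scored = [
            (len(query_words & set((m["user"] + " " + m["agent"]).lower().split())), m)
            for m in self.stream.items
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        # Keep only the top-k memories that share at least one word.
        return [m for score, m in scored[: self.top_k] if score > 0]

class SCMAgent:
    # Backbone agent: composes retrieved memory into the prompt,
    # then writes the new turn back into the memory stream.
    def __init__(self, llm):
        self.stream = MemoryStream()
        self.controller = MemoryController(self.stream)
        self.llm = llm

    def chat(self, user_input: str) -> str:
        memories = self.controller.retrieve(user_input)
        context = "\n".join(
            f"User: {m['user']}\nAgent: {m['agent']}" for m in memories
        )
        prompt = f"Relevant memory:\n{context}\nUser: {user_input}\nAgent:"
        response = self.llm(prompt)
        self.stream.add(user_input, response)
        return response
```

In this plug-and-play spirit, swapping the backbone only means passing a different `llm` callable; the memory stream and controller are untouched.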

Bing Wang, Xinnian Liang, Jian Yang, Hui Huang, Shuangzhi Wu, Peihao Wu, Lu Lu, Zejun Ma, Zhoujun Li • 2023

Related benchmarks

Task                     Dataset     Metric          Result   Rank
Long-form Dialogue       SCM4LLMs    Quality         84.38    32
Long-form Dialogue       Locomo      EM              22.19    32
Long-form Dialogue       MT-Bench+   Quality Score   80.23    32
Memory Discrimination    BID-20K     Accuracy        54.4     9
Memory Discrimination    IID-10K     Accuracy        0.104    9
Memory Discrimination    GID-1K      Accuracy        50       5
