
Retrieval-augmented Prompt Learning for Pre-trained Foundation Models

About

Pre-trained foundation models (PFMs) have become essential for large-scale multimodal learning. Through prompt learning, researchers have effectively employed the "pre-train, prompt, and predict" paradigm to improve few-shot performance. However, prompt learning for PFMs still follows a purely parametric paradigm, which can upset the balance between memorization and generalization: with limited fully-supervised data, conventional prompt learning may struggle to exploit atypical instances and may overfit to shallow patterns. To overcome these constraints, we present RetroPrompt, which decouples knowledge from rote memorization to strike a balance between memorization and generalization. Unlike conventional prompting methods, RetroPrompt constructs an open-book knowledge store from the training data and employs a retrieval mechanism throughout the input, training, and inference stages, enabling the model to retrieve related contexts from the corpus as cues for enhancement. Extensive experiments on datasets spanning natural language processing and computer vision demonstrate that RetroPrompt outperforms prior methods in both zero-shot and few-shot settings. Detailed analyses of memorization further show that RetroPrompt reduces the model's reliance on rote memorization, leading to improved generalization.
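To make the retrieval idea concrete, here is a minimal, self-contained sketch of retrieval-augmented prompting: a knowledge store is built from labeled training examples, nearest neighbors of a query are retrieved, and they are prepended as cues to a cloze-style prompt. The `embed` function below is a hypothetical bag-of-words stand-in for a real sentence encoder, and the prompt template is illustrative; RetroPrompt's actual encoder, datastore, and training-time retrieval differ in detail.

```python
import math

def embed(text):
    # Hypothetical stand-in for a real sentence encoder: hash each
    # token into a fixed-size bag-of-words vector, then L2-normalize.
    vec = [0.0] * 64
    for tok in text.lower().split():
        vec[sum(ord(c) for c in tok) % 64] += 1.0
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm else vec

def cosine(a, b):
    # Vectors are unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

class RetrievalStore:
    """Open-book knowledge store built from labeled training examples."""
    def __init__(self, examples):
        self.examples = examples                    # list of (text, label)
        self.keys = [embed(t) for t, _ in examples]

    def retrieve(self, query, k=2):
        # Rank stored examples by similarity to the query; return top-k.
        q = embed(query)
        ranked = sorted(range(len(self.keys)),
                        key=lambda i: -cosine(self.keys[i], q))
        return [self.examples[i] for i in ranked[:k]]

def build_prompt(query, store, k=2):
    # Prepend the retrieved neighbors as in-context cues before a
    # cloze-style prompt for the query itself.
    demos = "".join(f"{t} It was {y}. " for t, y in store.retrieve(query, k))
    return demos + f"{query} It was [MASK]."

store = RetrievalStore([
    ("a gripping and heartfelt film", "great"),
    ("a dull, lifeless script", "terrible"),
    ("the soundtrack is wonderful", "great"),
])
print(build_prompt("a heartfelt wonderful story", store, k=2))
```

A masked language model would then fill the `[MASK]` slot, with the retrieved neighbors serving as the additional cues that the parametric prompt alone would lack.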

Xiang Chen, Yixin Ou, Quan Feng, Lei Li, Piji Li, Haibo Ye, Sheng-Jun Huang, Shuofei Qiao, Shumin Deng, Huajun Chen, Ningyu Zhang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | ImageNet A | Top-1 Acc | 23.29 | 553 |
| Image Classification | ImageNet V2 | -- | -- | 487 |
| Image Classification | ImageNet-R | Accuracy | 55.31 | 148 |
| Sentiment Classification | MR (test) | Accuracy | 88 | 142 |
| Sentiment Analysis | SST-2 (test) | Accuracy | 91.4 | 136 |
| Image Classification | ImageNet-Sketch | Accuracy | 32.89 | 77 |
| Sentiment Classification | CR (test) | Mean Accuracy | 88.8 | 58 |
| Natural Language Inference | RTE (test) | Accuracy | 67.3 | 52 |
| Paraphrase Detection | QQP (test) | Accuracy | 74 | 51 |
