
UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation

About

Large Language Models (LLMs) are popular for their impressive abilities, but the need for model-specific fine-tuning or task-specific prompt engineering can hinder their generalization. We propose UPRISE (Universal Prompt Retrieval for Improving zero-Shot Evaluation), which tunes a lightweight and versatile retriever that automatically retrieves prompts for a given zero-shot task input. Specifically, we demonstrate universality in a cross-task and cross-model scenario: the retriever is tuned on a diverse set of tasks, but tested on unseen task types; we use a small frozen LLM, GPT-Neo-2.7B, for tuning the retriever, but test the retriever on different LLMs of much larger scales, such as BLOOM-7.1B, OPT-66B and GPT3-175B. Additionally, we show that UPRISE mitigates the hallucination problem in our experiments with ChatGPT, suggesting its potential to improve even the strongest LLMs. Our model and code are available at https://github.com/microsoft/LMOps.
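To make the retrieval idea concrete, here is a minimal, purely illustrative sketch of prompt retrieval for zero-shot inference. UPRISE's actual retriever is a tuned dense encoder scored against a frozen GPT-Neo-2.7B; the toy bag-of-words scorer, the `PROMPT_POOL` contents, and all function names below are stand-in assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a toy retriever that scores candidate
# prompts against a task input and prepends the best matches.
# UPRISE instead tunes a dense neural retriever with LLM feedback.

from collections import Counter
import math

# Hypothetical pool of candidate prompts (placeholder templates).
PROMPT_POOL = [
    "Premise: ... Hypothesis: ... Does the premise entail the hypothesis?",
    "Question: ... Answer choices: ... Which answer is correct?",
    "Review: ... Is the sentiment positive or negative?",
]

def embed(text):
    """Toy bag-of-words 'embedding': lowercase token counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_prompts(task_input, pool=PROMPT_POOL, k=1):
    """Rank candidate prompts by similarity to the task input and
    return the top-k to prepend before querying a frozen LLM."""
    scored = sorted(pool,
                    key=lambda p: cosine(embed(p), embed(task_input)),
                    reverse=True)
    return scored[:k]

def build_llm_input(task_input, k=1):
    """Concatenate the retrieved prompts with the task input,
    forming the final input string for the (frozen) LLM."""
    prompts = retrieve_prompts(task_input, k=k)
    return "\n\n".join(prompts + [task_input])
```

The key property this sketch mirrors is that the retriever is the only tuned component: the same retrieval step can sit in front of any frozen LLM, which is what lets UPRISE transfer across model scales.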

Daixuan Cheng, Shaohan Huang, Junyu Bi, Yuefeng Zhan, Jianfeng Liu, Yujing Wang, Hao Sun, Furu Wei, Denvy Deng, Qi Zhang • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | Accuracy | 54.3 | 1891 |
| Natural Language Inference | RTE | Accuracy | 55.2 | 448 |
| Question Answering | ARC-E | Accuracy | 64.1 | 416 |
| Question Answering | OBQA | Accuracy | 49.8 | 300 |
| Commonsense Reasoning | COPA | Accuracy | 72 | 197 |
| Question Answering | ARC-C | Accuracy | 32.9 | 192 |
| Natural Language Inference | SNLI | Accuracy | 75.5 | 180 |
| Sentiment Analysis | SST-5 | Accuracy | 52.6 | 106 |
| Sentiment Analysis | Sent140 | Accuracy | 84.4 | 79 |
| Natural Language Inference | QNLI | Accuracy | 72.5 | 61 |

Showing 10 of 22 rows.
