
PIAST: Rapid Prompting with In-context Augmentation for Scarce Training data

About

LLMs are highly sensitive to prompt design, but handcrafting effective prompts is difficult and often requires carefully constructed few-shot examples. We propose a fast automatic prompt construction algorithm that augments human instructions with a small set of generated few-shot examples. Our method iteratively replaces, drops, or keeps few-shot examples based on Monte Carlo Shapley estimates of example utility. For faster execution, we use aggressive subsampling and a replay buffer to speed up evaluations. The method can be run under different compute time budgets. On a limited budget, we outperform existing automatic prompting methods on text simplification and GSM8K and obtain second-best results on classification and summarization. With an extended, but still modest, compute budget we set a new state of the art among automatic prompting methods on classification, simplification, and GSM8K. Our results show that carefully constructed examples, rather than exhaustive instruction search, are the dominant lever for fast and data-efficient prompt engineering. Our code is available at https://github.com/Batorskq/PIAST.
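The core idea above, scoring each few-shot example by its Monte Carlo Shapley value, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `score` callback is a hypothetical stand-in for evaluating a prompt with a given example subset on a (subsampled) dev set via the LLM, and the sampling budget and keep/drop/replace policy are simplified.

```python
import random

def monte_carlo_shapley(examples, score, n_samples=200, seed=0):
    """Estimate each example's Shapley value by averaging its marginal
    contribution to `score` over random orderings of the example set.

    examples  : list of candidate few-shot examples
    score     : callable mapping a list of examples to a utility float
                (in PIAST-like settings, this would run the prompt on a
                subsampled dev set; here it is any cheap stand-in)
    n_samples : number of random permutations to average over
    """
    rng = random.Random(seed)
    n = len(examples)
    values = [0.0] * n
    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)
        prefix = []
        prev = score(prefix)            # utility of the empty example set
        for idx in order:
            prefix = prefix + [examples[idx]]
            cur = score(prefix)
            values[idx] += cur - prev   # marginal contribution of example idx
            prev = cur
    return [v / n_samples for v in values]

# Toy usage: an additive utility where example "a" is worth 1.0 and the
# others 0.1 each; the estimates then recover those per-example utilities,
# and low-value examples would be candidates for dropping or replacement.
vals = monte_carlo_shapley(
    ["a", "b", "c"],
    lambda s: (1.0 if "a" in s else 0.0) + 0.1 * sum(1 for e in s if e != "a"),
)
```

In practice the permutation loop is the expensive part, since every `score` call is an LLM evaluation; this is why the abstract pairs the estimator with aggressive subsampling and a replay buffer of past evaluations.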

Pawel Batorski, Paul Swoboda • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mathematical Reasoning | GSM8K | Accuracy | 92.12 | 499
Mathematical Reasoning | MATH 500 | Accuracy | 48.7 | 442
Text Classification | AG News (test) | Accuracy | 87.39 | 228
Text Classification | TREC | Accuracy | 78.4 | 207
Text Classification | SST-2 (test) | Accuracy | 95.88 | 185
Medical Question Answering | MedQA | Accuracy | 52.89 | 153
Text Classification | MR (test) | Accuracy | 91 | 148
Subjectivity Classification | Subj (test) | Accuracy | 80.98 | 127
Text Classification | TREC (test) | Accuracy | 78.4 | 115
Text Classification | MR | Accuracy | 91 | 106

Showing 10 of 19 rows.
