
Privacy-Preserving In-Context Learning for Large Language Models

About

In-context learning (ICL) is an important capability of large language models (LLMs), enabling these models to dynamically adapt based on specific in-context exemplars, thereby improving accuracy and relevance. However, an LLM's responses may leak the sensitive private information contained in the in-context exemplars. To address this challenge, we propose Differentially Private In-context Learning (DP-ICL), a general paradigm for privatizing ICL tasks. The key idea of the DP-ICL paradigm is to generate differentially private responses through a noisy consensus among an ensemble of LLM responses, each based on a disjoint exemplar set. Building on this general paradigm, we instantiate several techniques showing how to privatize ICL for text classification and language generation. We evaluate DP-ICL on four text classification benchmarks and two language generation tasks, and our empirical results show that DP-ICL achieves a strong utility-privacy tradeoff.
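For the text classification setting, the "noisy consensus" idea described above can be sketched as a private voting scheme: partition the exemplars into disjoint subsets, let the model vote once per subset, perturb the vote counts, and release only the noisy winner. The sketch below is illustrative, not the authors' implementation; the function names, the number of partitions, and the use of Laplace noise via Report Noisy Max are assumptions for the sake of the example.

```python
import random
from collections import Counter

def dp_icl_classify(query, exemplars, model, num_partitions=10, epsilon=1.0):
    """Illustrative DP-ICL classification via noisy majority vote.

    `model(query, subset)` is any callable that returns a predicted label
    given the query and one disjoint subset of in-context exemplars.
    """
    # Partition the sensitive exemplars into disjoint subsets, so each
    # exemplar influences at most one vote (sensitivity 1).
    exemplars = list(exemplars)
    random.shuffle(exemplars)
    subsets = [exemplars[i::num_partitions] for i in range(num_partitions)]

    # Each subset contributes one vote via an independent ICL query.
    votes = Counter(model(query, subset) for subset in subsets)

    # Report Noisy Max: add Laplace(1/epsilon) noise to each vote count
    # (a Laplace sample is the difference of two exponential samples).
    noisy = {
        label: count + random.expovariate(epsilon) - random.expovariate(epsilon)
        for label, count in votes.items()
    }
    return max(noisy, key=noisy.get)
```

Because each exemplar appears in exactly one subset, changing a single exemplar can change at most one vote, which is what makes the noisy argmax differentially private with respect to the exemplar set.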

Tong Wu, Ashwinee Panda, Jiachen T. Wang, Prateek Mittal • 2023

Related benchmarks

Task                      Dataset            Metric    Result  Rank
Sentiment Classification  SST2 (test)        Accuracy  95.9    214
Dialogue Summarization    SamSum (test)      --        --      80
Sentiment Classification  SST2               Accuracy  95.9    20
Sentiment Classification  MPQA               Accuracy  90.4    17
Text Classification       Disaster           Accuracy  70.3    17
Question Answering        PFL-DocVQA (test)  ROUGE-1   0.607   7
