
Cape: Context-Aware Prompt Perturbation Mechanism with Differential Privacy

About

Large Language Models (LLMs) have gained significant popularity due to their remarkable capabilities in text understanding and generation. However, despite their widespread deployment in inference services such as ChatGPT, concerns have arisen about the potential leakage of sensitive user data. Existing solutions primarily rely on privacy-enhancing technologies to mitigate such risks, but they face a trade-off among efficiency, privacy, and utility. To narrow this gap, we propose Cape, a context-aware prompt perturbation mechanism based on differential privacy that enables efficient inference with an improved privacy-utility trade-off. Concretely, we introduce a hybrid utility function that better captures token similarity. Additionally, we propose a bucketized sampling mechanism to handle the large sampling space, which can otherwise lead to long-tail phenomena. Extensive experiments across multiple datasets, along with ablation studies, demonstrate that Cape achieves a better privacy-utility trade-off than prior state-of-the-art works.
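To make the idea concrete, the sketch below illustrates what a differentially private token-perturbation step with bucketized sampling might look like: candidate replacement tokens are scored by a utility function, grouped into utility buckets to tame the large, long-tailed sampling space, and a bucket is then drawn via exponential-mechanism weights before a token is chosen within it. This is an illustrative assumption of the general technique, not the authors' actual Cape algorithm; the function name, bucket count, and uniform within-bucket choice are all simplifications introduced here.

```python
import math
import random

def bucketized_exponential_sample(utilities, epsilon, sensitivity=1.0, num_buckets=10):
    """Sample a candidate index via the exponential mechanism over utility
    buckets. Illustrative sketch only (not Cape's exact mechanism):
    bucketing replaces per-token weights with per-bucket weights, which
    avoids a long tail of tiny probabilities over a huge vocabulary."""
    lo, hi = min(utilities), max(utilities)
    width = (hi - lo) / num_buckets or 1.0  # guard against all-equal utilities

    # Group candidate indices by their (discretized) utility bucket.
    buckets = {}
    for i, u in enumerate(utilities):
        b = min(int((u - lo) / width), num_buckets - 1)
        buckets.setdefault(b, []).append(i)

    # Exponential-mechanism weight for each bucket, using the bucket's
    # midpoint utility as its representative score.
    reps = {b: lo + (b + 0.5) * width for b in buckets}
    weights = {
        b: len(members) * math.exp(epsilon * reps[b] / (2 * sensitivity))
        for b, members in buckets.items()
    }

    # Draw a bucket proportionally to its weight, then a token within it.
    total = sum(weights.values())
    r = random.random() * total
    for b in sorted(weights):
        r -= weights[b]
        if r <= 0:
            return random.choice(buckets[b])
    return random.choice(buckets[max(buckets)])
```

With a large privacy budget `epsilon`, the draw concentrates on the highest-utility bucket (the original token or a close synonym); as `epsilon` shrinks, mass spreads to lower-utility buckets, trading utility for privacy.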

Haoqi Wu, Wei Dai, Li Wang, Qiang Yan · 2025

Related benchmarks

| Task | Dataset | Result (TRA) | Rank |
|---|---|---|---|
| Prompt Reconstruction Defense (TokenInfer attack) | Midjourney | 85.53 | 7 |
| Prompt Reconstruction Defense (TokenInfer attack) | WikiText2 | 84.65 | 7 |
| Prompt Reconstruction Defense (TokenInfer attack) | Patient | 84.64 | 7 |
| Prompt Reconstruction Defense (TokenInfer attack) | GPT-samples | 86.1 | 7 |
