
Towards Privacy-Preserving Large Language Model: Text-free Inference Through Alignment and Adaptation

About

Current LLM-based services typically require users to submit raw text regardless of its sensitivity. While intuitive, this practice introduces substantial privacy risks, as unauthorized access may expose personal, medical, or legal information. Although prior defenses have striven to mitigate these risks, they often incur substantial computational overhead and degrade model performance. To overcome this privacy-efficiency trade-off, we introduce Privacy-Preserving Fine-Tuning (PPFT), a novel training pipeline that eliminates the need to transmit raw prompt text while maintaining a favorable balance between privacy preservation and model utility for both clients and service providers. Our approach operates in two stages: first, we train a client-side encoder together with a server-side projection module and LLM, enabling the server to condition on k-pooled prompt embeddings instead of raw text; second, we fine-tune the projection module and LLM on private, domain-specific data using noise-injected embeddings, allowing effective adaptation without exposing plain-text prompts or requiring access to the decoder's internal parameters. Extensive experiments on domain-specific and general benchmarks demonstrate that PPFT achieves a strong balance between privacy and utility, maintaining competitive performance with minimal degradation compared to noise-free upper bounds.
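The two client-side ideas in the abstract — pooling a prompt's token embeddings into k vectors before transmission, and injecting noise into embeddings during fine-tuning — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names (`k_pool`, `add_noise`), the mean-pooling scheme, and the Gaussian noise model are all assumptions for exposition.

```python
import numpy as np

def k_pool(token_embeddings: np.ndarray, k: int) -> np.ndarray:
    """Pool a (T, d) array of token embeddings into k pooled vectors.

    Hypothetical stand-in for the client-side encoder's output: only
    these k pooled embeddings, not the raw text, leave the client.
    Here we simply mean-pool k contiguous chunks of the sequence.
    """
    chunks = np.array_split(token_embeddings, k, axis=0)
    return np.stack([chunk.mean(axis=0) for chunk in chunks])

def add_noise(embeddings: np.ndarray, sigma: float,
              rng: np.random.Generator) -> np.ndarray:
    """Inject zero-mean Gaussian noise into embeddings.

    Illustrates the noise-injected embeddings used in PPFT's
    second-stage fine-tuning; the noise distribution is assumed.
    """
    return embeddings + rng.normal(0.0, sigma, size=embeddings.shape)

# Toy prompt: 12 tokens with 8-dimensional embeddings, pooled to k=4.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(12, 8))
pooled = k_pool(tokens, k=4)               # shape (4, 8)
noisy = add_noise(pooled, sigma=0.1, rng=rng)
```

The privacy argument rests on the client sending only `pooled` (or `noisy`) to the server: the lossy pooling and the added noise make recovering the original token sequence difficult, while the server-side projection module is trained to condition the LLM on these vectors.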

Jeongho Yoon, Chanhee Park, Yongchan Chun, Hyeonseok Moon, Heuiseok Lim • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Commonsense Question Answering | CSQA | Accuracy 60.86 | 58 |
| Question Answering | SQuAD | Score 89.3 | 29 |
| Clinical Downstream Task | Pri-DDX | Accuracy 92.75 | 12 |
| Clinical Downstream Task | Pri-NLICE | Accuracy 90.49 | 12 |
| Clinical Downstream Task | Pri-SLJA | Accuracy 94.66 | 12 |
| Clinical Downstream Task | Pri-DDX, Pri-NLICE, and Pri-SLJA Aggregate | Average Accuracy 92.91 | 12 |
