
DP-Adam-AC: Privacy-preserving Fine-Tuning of Localizable Language Models Using Adam Optimization with Adaptive Clipping

About

Large language models (LLMs) such as ChatGPT have evolved into powerful and ubiquitous tools. Fine-tuning on small datasets allows LLMs to acquire specialized skills for specific tasks efficiently. Although LLMs provide great utility in both general and task-specific use cases, they are limited by two security-related concerns. First, the hardware requirements of traditional LLMs make them infeasible to run locally on consumer-grade devices; a remote network connection to the LLM provider's server is usually required, leaving the system vulnerable to network attacks. Second, fine-tuning an LLM for a sensitive task may involve sensitive data, and non-private fine-tuning algorithms produce models vulnerable to training-data reproduction attacks. Our work addresses these security concerns by enhancing differentially private optimization algorithms and applying them to fine-tune localizable language models. We introduce adaptive gradient clipping, along with other engineering enhancements, to the standard DP-Adam optimizer to create DP-Adam-AC. We use our optimizer to fine-tune representatives of two localizable LLM designs: a small language model (Qwen2.5-0.5B) and a 1.58-bit quantized model (Bitnet-b1.58-2B). We demonstrate promising improvements in loss in experiments on two synthetic datasets.
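The abstract describes DP-Adam-AC only at a high level, so the following is a minimal, hypothetical sketch of one optimizer step of this kind: per-sample gradients are clipped to a threshold C, Gaussian noise scaled by C is added, Adam moments are updated on the privatized gradient, and C is then adjusted toward a target quantile of the observed gradient norms (the quantile-based adaptive clipping technique; the paper's actual update rule, hyperparameters, and accounting may differ). All names and defaults here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dp_adam_ac_step(params, per_sample_grads, state, *, lr=1e-3,
                    beta1=0.9, beta2=0.999, eps=1e-8,
                    noise_multiplier=1.0, target_quantile=0.5,
                    clip_lr=0.2, rng=None):
    """Hypothetical DP-Adam step with an adaptively updated clip norm.

    params:            parameter vector, shape (d,)
    per_sample_grads:  per-example gradients, shape (n, d)
    state:             dict with Adam moments "m", "v", step "t",
                       and current clipping threshold "clip"
    """
    rng = np.random.default_rng() if rng is None else rng
    C = state["clip"]

    # 1. Clip each example's gradient to L2 norm at most C.
    norms = np.linalg.norm(per_sample_grads, axis=1)
    factors = np.minimum(1.0, C / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * factors[:, None]

    # 2. Sum, add Gaussian noise scaled to the clip norm, and average.
    n = per_sample_grads.shape[0]
    noisy_grad = (clipped.sum(axis=0)
                  + rng.normal(0.0, noise_multiplier * C,
                               size=params.shape)) / n

    # 3. Standard Adam update on the privatized gradient.
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * noisy_grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * noisy_grad ** 2
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    new_params = params - lr * m_hat / (np.sqrt(v_hat) + eps)

    # 4. Adaptive clipping: geometrically nudge C so that roughly a
    #    target_quantile fraction of per-sample norms fall below it.
    #    (A fully private variant would noise this fraction as well.)
    frac_below = np.mean(norms <= C)
    state["clip"] = C * np.exp(-clip_lr * (frac_below - target_quantile))
    return new_params
```

As a design note, step 4 is what distinguishes this sketch from vanilla DP-Adam: a fixed clip norm that is too small biases gradients, while one that is too large wastes the privacy budget on noise, so tracking a quantile of the norms keeps C in a useful range as training progresses.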

Ruoxing Yang • 2025

Related benchmarks

Task                    Dataset               Metric           Result   Rank
Image Classification    EMNIST (test)         Accuracy         89.85    174
Image Classification    ImageNet-100 (test)   Clean Accuracy   64.24    109
Image Classification    MNIST (test)          Accuracy         95       61
Regression              Energy                RMSE             0.116    13
Binary Classification   UCI Adult             AUC              0.845    8
Binary Classification   UCI Heart             AUC              0.815    8
