
Prior-Informed Zeroth-Order Optimization with Adaptive Direction Alignment for Memory-Efficient LLM Fine-Tuning

About

Fine-tuning large language models (LLMs) has achieved remarkable success across various NLP tasks, but the substantial memory overhead of backpropagation remains a critical bottleneck, especially as model scales grow. Zeroth-order (ZO) optimization alleviates this issue by estimating gradients through forward passes and Gaussian sampling, avoiding backpropagation entirely. However, conventional ZO methods suffer from high variance in gradient estimation due to their reliance on random perturbations, leading to slow convergence and suboptimal performance. We propose a simple plug-and-play method that incorporates prior-informed perturbations to refine gradient estimation. Our method dynamically computes a guiding vector from Gaussian samples, which directs perturbations toward more informative directions and significantly accelerates convergence compared to standard ZO approaches. We further investigate a greedy perturbation strategy to explore the impact of prior knowledge on gradient estimation. Theoretically, we prove that our gradient estimator achieves stronger alignment with the true gradient direction, enhancing optimization efficiency. Extensive experiments across LLMs of varying scales and architectures demonstrate that our proposed method can be seamlessly integrated into existing optimization methods, delivering faster convergence and superior performance. Notably, on the OPT-13B model, our method outperforms traditional ZO optimization across all 11 benchmark tasks and surpasses gradient-based baselines on 9 out of 11 tasks, establishing a robust balance between efficiency and accuracy.
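To make the idea concrete, here is a minimal sketch of two-point ZO gradient estimation with a prior-guided perturbation, in the spirit the abstract describes. The mixing rule (`guide`, `alpha`) and the choice of reusing the previous estimate as the guiding vector are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

def zo_gradient(loss_fn, theta, mu=1e-3, guide=None, alpha=0.5, rng=None):
    """Two-point zeroth-order gradient estimate via forward passes only.

    `guide` and `alpha` are hypothetical knobs: the Gaussian sample is
    mixed with a normalized guiding direction to bias the perturbation
    toward informative directions, as the abstract sketches.
    """
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(theta.shape)
    if guide is not None:
        g = guide / (np.linalg.norm(guide) + 1e-12)
        # Blend random exploration with the prior direction.
        z = (1 - alpha) * z + alpha * g * np.sqrt(theta.size)
    # Central finite difference: two forward passes, no backpropagation.
    delta = (loss_fn(theta + mu * z) - loss_fn(theta - mu * z)) / (2 * mu)
    return delta * z

# Usage: minimize a toy quadratic with ZO-SGD, reusing the last
# estimate as the guiding vector (a simple momentum-like prior).
loss = lambda w: float(np.sum((w - 1.0) ** 2))
w = np.zeros(4)
guide = None
rng = np.random.default_rng(0)
for _ in range(500):
    g_hat = zo_gradient(loss, w, guide=guide, rng=rng)
    guide = g_hat
    w -= 0.05 * g_hat
```

On this toy problem the guided estimator behaves like plain ZO-SGD with an added momentum-style prior; the paper's contribution is the principled construction and analysis of that guiding vector.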

Feihu Jin, Shipeng Cen, Ying Tan • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text Classification | BoolQ | Accuracy | 77.6 | 84 |
| Text Classification | RTE | Accuracy | 76.2 | 78 |
| Classification | SST2 | Accuracy | 94.7 | 58 |
| Sentence Completion | COPA | Accuracy | 90 | 48 |
| Classification | CB | Accuracy | 85.7 | 46 |
| Generation | SQuAD | F1 Score | 85.3 | 44 |
| Classification | WSC | Accuracy | 66.3 | 41 |
| Word-in-Context Classification | WiC | Accuracy | 64.1 | 34 |
| Generation | DROP | F1 Score | 32.9 | 29 |
| Multiple-Choice | ReCoRD | Accuracy | 83.8 | 29 |

Showing 10 of 12 rows.
