
AGZO: Activation-Guided Zeroth-Order Optimization for LLM Fine-Tuning

About

Zeroth-Order (ZO) optimization has emerged as a promising solution for fine-tuning LLMs under strict memory constraints, as it avoids the prohibitive memory cost of storing activations for backpropagation. However, existing ZO methods typically employ isotropic perturbations, neglecting the rich structural information available during the forward pass. In this paper, we identify a crucial link between gradient formation and activation structure: the gradient of a linear layer is confined to the subspace spanned by its input activations. Leveraging this insight, we propose Activation-Guided Zeroth-Order optimization (AGZO). Unlike prior methods, AGZO extracts a compact, activation-informed subspace on the fly during the forward pass and restricts perturbations to this low-rank subspace. We provide a theoretical framework showing that AGZO optimizes a subspace-smoothed objective and provably yields update directions with higher cosine similarity to the true gradient than isotropic baselines. Empirically, we evaluate AGZO on Qwen3 and Pangu models across various benchmarks. AGZO consistently outperforms state-of-the-art ZO baselines and significantly narrows the performance gap with first-order fine-tuning, while maintaining almost the same peak memory footprint as other ZO methods.
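The abstract's key claim is illustrated below with a minimal numpy sketch (not the authors' implementation). For a toy linear layer with squared loss, the true gradient 2(WX − Y)Xᵀ lies in the row space spanned by the input activations X, so a two-point ZO estimate whose perturbation is restricted to that activation subspace should align better with the true gradient than an isotropic (MeZO-style) perturbation. All dimensions, the loss, and the sample counts here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer: loss(W) = ||W X - Y||_F^2, so grad(W) = 2 (W X - Y) X^T
# lies in the subspace spanned by the input activations X.
d_in, d_out, n = 256, 8, 16
X = rng.standard_normal((d_in, n))       # input activations from the forward pass
Y = rng.standard_normal((d_out, n))      # targets
W = rng.standard_normal((d_out, d_in))   # current weights

grad = 2 * (W @ X - Y) @ X.T             # true first-order gradient

def loss(M):
    return np.sum((M @ X - Y) ** 2)

def cos(A, B):
    return np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B))

# Activation-informed basis: left singular vectors of X, extracted on the fly
U, _, _ = np.linalg.svd(X, full_matrices=False)   # (d_in, n), orthonormal columns

eps, trials = 1e-4, 300
iso_sims, sub_sims = [], []
for _ in range(trials):
    # Isotropic perturbation over the full weight matrix
    Z = rng.standard_normal((d_out, d_in))
    g = (loss(W + eps * Z) - loss(W - eps * Z)) / (2 * eps)  # two-point estimate
    iso_sims.append(cos(g * Z, grad))

    # Activation-guided perturbation: restricted to the span of the activations
    Z = rng.standard_normal((d_out, n)) @ U.T
    g = (loss(W + eps * Z) - loss(W - eps * Z)) / (2 * eps)
    sub_sims.append(cos(g * Z, grad))

print(f"isotropic  mean cosine: {np.mean(iso_sims):.4f}")
print(f"activation mean cosine: {np.mean(sub_sims):.4f}")
```

Because the gradient here lies entirely inside the rank-n activation subspace, the subspace-restricted estimate wastes no perturbation energy on orthogonal directions, and its average cosine similarity to the true gradient is consistently higher.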

Wei Lin, Yining Jiang, Qingyu Song, Qiao Xiang, Hong Xu • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Natural Language Understanding | SuperGLUE | – | 84 |
| Natural Language Understanding | GLUE and SuperGLUE (test val) | SST-2: 89.2 | 37 |
| Natural Language Understanding | GLUE & SuperGLUE (test) | RTE Accuracy: 73.6 | 17 |
| Natural Language Understanding | GLUE | Accuracy (SST-2): 87.7 | 6 |
| Question Answering | SQuAD | Accuracy: 79 | 6 |
| Natural Language Understanding | NLU Benchmark Suite (SST2, COPA, CB, BoolQ, RTE, WiC), Pangu-1B (NPU) (val) | SST2 Accuracy: 76.5 | 6 |
