
Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning

About

While fine-tuning large language models (LLMs) for specific tasks often yields impressive results, it comes at the cost of memory inefficiency due to back-propagation in gradient-based training. Memory-efficient zeroth-order (MeZO) optimizers, recently proposed to address this issue, require only forward passes during training, making them more memory-friendly. However, compared with exact gradients, ZO-based gradients usually exhibit an estimation error, which can significantly hurt the optimization process, leading to slower convergence and suboptimal solutions. In addition, we find that the estimation error hurts more when added to large weights than to small ones. Based on this observation, this paper introduces Sparse MeZO, a novel memory-efficient zeroth-order optimization approach that applies ZO only to a carefully chosen subset of parameters. We propose a simple yet effective parameter selection scheme that yields significant performance gains with Sparse MeZO. Additionally, we develop a memory-optimized implementation for sparse masking, ensuring the algorithm requires only inference-level memory consumption, allowing Sparse MeZO to fine-tune LLaMA-30b on a single A100 GPU. Experimental results illustrate that Sparse MeZO consistently improves both performance and convergence speed over MeZO without any overhead. For example, it achieves a 9% absolute accuracy improvement and a 3.5x speedup over MeZO on the RTE task. Code is available at https://github.com/NUS-HPC-AI-Lab/SparseMeZO.
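The core idea described above can be sketched in a few lines: estimate a directional derivative from two forward passes under a shared random perturbation, but restrict both the perturbation and the update to small-magnitude weights. The toy least-squares loss, the quantile-based mask rule, and all hyperparameters below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

np.random.seed(0)


def loss(w, X, y):
    # Toy least-squares objective standing in for an LLM forward pass.
    return float(np.mean((X @ w - y) ** 2))


def sparse_zo_step(w, X, y, eps=1e-3, lr=1e-2, sparsity=0.5):
    # Mask rule (assumed): keep only small-magnitude weights, since the paper
    # observes that ZO estimation error hurts more when added to large weights.
    threshold = np.quantile(np.abs(w), sparsity)
    mask = (np.abs(w) <= threshold).astype(w.dtype)

    # Shared random perturbation, restricted to the masked (small) weights.
    z = np.random.randn(*w.shape) * mask

    # Two forward passes -> central-difference estimate of the directional
    # derivative along z (SPSA-style, as in MeZO).
    grad_scale = (loss(w + eps * z, X, y) - loss(w - eps * z, X, y)) / (2 * eps)

    # Update touches only the masked coordinates (z is zero elsewhere).
    return w - lr * grad_scale * z


# Synthetic regression problem to exercise the step.
X = np.random.randn(64, 8)
w_true = np.random.randn(8)
y = X @ w_true

w = np.zeros(8)
initial_loss = loss(w, X, y)
for _ in range(500):
    w = sparse_zo_step(w, X, y)
final_loss = loss(w, X, y)
print(initial_loss, final_loss)
```

Note that only two forward passes per step are needed and no gradients are stored, which is the source of the inference-level memory footprint; the sparse mask simply zeroes the perturbation before those passes.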

Yong Liu, Zirui Zhu, Chaoyu Gong, Minhao Cheng, Cho-Jui Hsieh, Yang You • 2024

Related benchmarks

Task                           | Dataset                         | Result               | Rank
Physical Commonsense Reasoning | PIQA                            | Accuracy: 85.3       | 329
Question Answering             | BoolQ                           | Accuracy: 79.2       | 240
Mathematical Reasoning         | AQUA                            | Accuracy: 26.6       | 132
Natural Language Understanding | SuperGLUE                       | --                   | 84
Social Commonsense Reasoning   | SIQA                            | Accuracy: 70.2       | 32
Natural Language Understanding | SuperGLUE 1,000 examples        | BoolQ Accuracy: 82.2 | 15
Natural Language Understanding | SuperGLUE 1,000 examples (test) | BoolQ: 85.3          | 10
