
Visualising Policy-Reward Interplay to Inform Zeroth-Order Preference Optimisation of Large Language Models

About

Fine-tuning Large Language Models (LLMs) with first-order methods like back-propagation is computationally intensive. Zeroth-Order (ZO) optimisation uses function evaluations instead of gradients, reducing memory usage, but suffers from slow convergence in high-dimensional models. As a result, ZO research in LLMs has mostly focused on classification, overlooking more complex generative tasks. In this paper, we introduce ZOPrO, a novel ZO algorithm designed for Preference Optimisation in LLMs. We begin by analysing the interplay between policy and reward models during traditional (first-order) Preference Optimisation, uncovering patterns in their relative updates. Guided by these insights, we adapt Simultaneous Perturbation Stochastic Approximation (SPSA) with a targeted sampling strategy to accelerate convergence. Through experiments on summarisation, machine translation, and conversational assistants, we demonstrate that our method consistently enhances reward signals while achieving convergence times comparable to first-order methods. While it falls short of some state-of-the-art methods, our work is the first to apply Zeroth-Order methods to Preference Optimisation in LLMs, going beyond classification tasks and paving the way for a largely unexplored research direction. Code and visualisations are available at https://github.com/alessioGalatolo/VisZOPrO
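The paper builds on Simultaneous Perturbation Stochastic Approximation (SPSA), which estimates a descent direction from just two function evaluations along a random perturbation, regardless of dimensionality. A minimal sketch on a toy quadratic objective is below; it illustrates plain SPSA only, not ZOPrO's targeted sampling strategy, and the step sizes and function names are illustrative.

```python
import numpy as np

def spsa_step(theta, loss, a=0.05, c=0.1, rng=None):
    """One SPSA update: approximate the gradient from two loss
    evaluations along a random Rademacher (+/-1) direction."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # perturbation direction
    # Two-point finite difference; since delta_i = +/-1, 1/delta = delta,
    # so multiplying by delta gives the per-coordinate estimate.
    g_hat = (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c) * delta
    return theta - a * g_hat

# Toy demo: minimise ||theta - 3||^2 using function evaluations only.
rng = np.random.default_rng(0)
theta = np.zeros(4)
loss = lambda t: float(np.sum((t - 3.0) ** 2))
for _ in range(500):
    theta = spsa_step(theta, loss, rng=rng)
print(loss(theta))  # converges close to 0
```

The memory advantage the abstract refers to comes from never materialising a gradient: only forward evaluations of the loss are needed, which is why ZO methods are attractive for fine-tuning large models.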

Alessio Galatolo, Zhenbang Dai, Katie Winkle, Meriem Beloucif • 2025

Related benchmarks

Task                     | Dataset                                     | Metric        | Result | Rank
Efficiency Analysis      | Alignment (train)                           | Training Time | 120    | 4
Conversational Assistant | HH-RLHF                                     | Reward        | 0.25   | 3
Machine Translation      | WMT20                                       | Reward        | 0.15   | 3
Summarization            | Summarize from Feedback                     | Reward        | 17     | 3
Conversational Assistant | Preference Optimization Conversational      | Reward        | 0.28   | 2
Machine Translation      | Preference Optimization Machine Translation | Reward        | 0.25   | 2
Summarization            | Preference Optimization Summarization       | Reward        | 0.3    | 2

Other info

Code
