
Limited Preference Data? Learning Better Reward Model with Latent Space Synthesis

About

Reward modeling, crucial for aligning large language models (LLMs) with human preferences, is often bottlenecked by the high cost of preference data, and existing textual data-synthesis methods are computationally expensive. We propose LENS, a novel framework that synthesizes preference data directly in the LLM's latent embedding space. Our method employs a Variational Autoencoder (VAE) to learn a structured latent representation of response embeddings. By performing controlled perturbations in this latent space and decoding back to the embedding space, we efficiently generate diverse, semantically consistent synthetic preference pairs, bypassing costly text generation and annotation. We provide theoretical guarantees that the synthesized pairs approximately preserve the original preference ordering and improve reward-model generalization. Empirically, our latent-space synthesis significantly outperforms text-based augmentation on standard benchmarks, achieving superior results while generating data 18x faster with a 16,000x smaller model. Our work offers a scalable and effective alternative for enhancing reward modeling through efficient data augmentation. Code is publicly available at https://github.com/deeplearning-wisc/lens.
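
To make the mechanism concrete, below is a minimal PyTorch sketch of the latent-space synthesis idea the abstract describes: a small VAE over response embeddings, with Gaussian perturbations applied in the latent space and decoded back to embedding space. The class ResponseVAE, the function synthesize_pair, the layer sizes, and the perturbation scale sigma are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

import torch
import torch.nn as nn

class ResponseVAE(nn.Module):
    """Toy VAE over fixed-size response embeddings (dimensions are illustrative)."""
    def __init__(self, emb_dim=4096, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(emb_dim, 512), nn.ReLU())
        self.mu_head = nn.Linear(512, latent_dim)
        self.logvar_head = nn.Linear(512, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, emb_dim))

    def encode(self, x):
        # Map an embedding to the parameters of its latent Gaussian.
        h = self.encoder(x)
        return self.mu_head(h), self.logvar_head(h)

    def decode(self, z):
        # Map a latent code back to the response-embedding space.
        return self.decoder(z)

@torch.no_grad()
def synthesize_pair(vae, chosen_emb, rejected_emb, sigma=0.1):
    """Perturb each response in latent space and decode back to embedding space.

    Small perturbations are meant to keep semantics, and hence the preference
    ordering, approximately intact, which is what the paper's theory argues.
    """
    mu_c, _ = vae.encode(chosen_emb)
    mu_r, _ = vae.encode(rejected_emb)
    z_c = mu_c + sigma * torch.randn_like(mu_c)
    z_r = mu_r + sigma * torch.randn_like(mu_r)
    return vae.decode(z_c), vae.decode(z_r)  # synthetic (chosen, rejected) pair

In this sketch, each call to synthesize_pair yields a new synthetic preference pair from an existing annotated one at the cost of two small forward passes, which is where the speed and model-size advantage over text-based augmentation would come from: no LLM decoding and no re-annotation are involved.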

Leitian Tao, Xuefeng Du, Sharon Li • 2025

Related benchmarks

Task                              | Dataset    | Result        | Rank
Mathematical Multimodal Reasoning | MathVerse  | Accuracy 44.7 | 221
Multimodal Math Reasoning         | MathVision | Accuracy 25.8 | 183
Multimodal Math Reasoning         | WeMath     | Accuracy 37.6 | 168
Multimodal Mathematical Reasoning | LogicVista | Accuracy 46.9 | 34
Multimodal Mathematical Reasoning | DynaMath   | Accuracy 21   | 28
