
Incentivizing Generative Zero-Shot Learning via Outcome-Reward Reinforcement Learning with Visual Cues

About

Recent advances in zero-shot learning (ZSL) have demonstrated the potential of generative models. Typically, generative ZSL synthesizes visual features conditioned on semantic prototypes to model the data distribution of unseen classes, and then trains a classifier on the synthesized data. However, the synthesized features often remain task-agnostic, degrading performance. Moreover, inferring a faithful distribution from semantic prototypes alone is insufficient for classes that are semantically similar but visually distinct. To address these issues and advance ZSL, we propose RLVC, an outcome-reward reinforcement learning (RL) framework with visual cues for generative ZSL. At its core, RL empowers the generative model to self-evolve, implicitly enhancing its generation capability. In particular, RLVC updates the generative model using an outcome-based reward, encouraging the synthesis of task-relevant features. Furthermore, we introduce class-wise visual cues that (i) align synthesized features with visual prototypes and (ii) stabilize the RL training updates. For the training process, we present a novel cold-start strategy. Comprehensive experiments and analyses on three prevalent ZSL benchmarks demonstrate that RLVC achieves state-of-the-art results, with a gain of up to 4.7%.
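The core loop described above (synthesize features from semantic prototypes, reward the generator for task-relevant outcomes, and align outputs with class-wise visual cues) can be sketched in a toy 1-D setting. This is a minimal illustrative assumption, not the paper's implementation: the prototypes, the Gaussian generator, the nearest-prototype classifier, and all hyperparameters below are invented for demonstration.

```python
import random

# Toy sketch of the RLVC idea: a 1-D "generator" maps semantic prototypes
# to visual features, is rewarded when a frozen classifier recognizes the
# synthesized feature, and is additionally pulled toward class-wise visual
# cues. All names and values here are illustrative assumptions.

SEM = {0: 1.0, 1: 2.0}   # semantic prototypes (conditioning input)
VIS = {0: 2.0, 1: 4.0}   # class-wise visual cues (visual prototypes)
SIGMA = 0.3              # std of the generator's output distribution

def classify(f):
    """Frozen downstream classifier: nearest visual prototype."""
    return min(VIS, key=lambda c: abs(f - VIS[c]))

def train(steps=500, lr_rl=0.01, lr_align=0.05, seed=0):
    random.seed(seed)
    w = 0.5  # generator parameter: feature ~ N(w * semantic, SIGMA^2)
    for _ in range(steps):
        c = random.choice([0, 1])
        mu = w * SEM[c]
        f = random.gauss(mu, SIGMA)                 # synthesize a feature
        reward = 1.0 if classify(f) == c else 0.0   # outcome-based reward
        # REINFORCE-style update: reward-weighted score function of the
        # Gaussian mean, pushing the generator toward features the
        # classifier gets right (task-relevant synthesis).
        w += lr_rl * reward * (f - mu) * SEM[c] / SIGMA ** 2
        # Visual-cue alignment: pull the generator's mean toward the
        # class's visual prototype, stabilizing the RL updates.
        w += lr_align * (VIS[c] - w * SEM[c]) * SEM[c]
    return w

def accuracy(w, n=200, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(n):
        c = random.choice([0, 1])
        f = random.gauss(w * SEM[c], SIGMA)
        hits += classify(f) == c
    return hits / n

w = train()
acc = accuracy(w)
```

In this sketch the alignment term alone would already drive `w` toward the correct mapping (since the visual cues are consistent with it), mirroring the paper's observation that visual cues stabilize what would otherwise be a noisy reward-only update.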

Wenjin Hou, Xiaoxiao Sun, Hehe Fan • 2026

Related benchmarks

Task | Dataset | Result | Rank
Generalized Zero-Shot Learning | CUB | H Score: 81.2 | 307
Generalized Zero-Shot Learning | SUN | H Score: 57.6 | 229
Generalized Zero-Shot Learning | AWA2 | H Score: 80.4 | 217
Zero-shot Learning | CUB | Top-1 Accuracy: 90.1 | 183
Zero-shot Learning | AWA2 | Top-1 Accuracy: 0.84 | 133
