Controlling Multimodal Conversational Agents with Coverage-Enhanced Latent Actions

About

Vision-language models are increasingly employed as multimodal conversational agents (MCAs) for diverse conversational tasks. Recently, reinforcement learning (RL) has been widely explored for adapting MCAs to various human-AI interaction scenarios. Although RL fine-tuning markedly improves generalization, it still struggles with the extremely large text token space. To address this, we instead learn a compact latent action space for RL fine-tuning. Specifically, we adopt a learning-from-observation mechanism to construct the codebook for the latent action space, in which future observations are used to estimate current latent actions, which in turn are used to reconstruct those future observations. However, the scarcity of paired image-text data hinders learning a codebook with sufficient coverage. We therefore leverage both paired image-text data and text-only data to construct the latent action space, using a cross-modal projector that transforms text embeddings into image-text embeddings. We initialize the cross-modal projector on paired image-text data and further train it on massive text-only data with a novel cycle consistency loss to enhance its robustness. We show that our latent-action-based method outperforms competitive baselines on two conversation tasks across various RL algorithms.
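Two of the abstract's ingredients lend themselves to a compact illustration: a quantized codebook that turns a future-observation embedding into a discrete latent action, and a cross-modal projector trained with a cycle consistency loss on text-only data. The PyTorch sketch below is an illustrative reconstruction under assumed design choices (VQ-style quantization with a straight-through estimator, a linear projector with a learned inverse, and all module names, dimensions, and loss weights invented for the example); it is not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of two ideas from the abstract:
# (1) a VQ-style codebook that maps a future-observation embedding to a
#     discrete latent action, and (2) a cross-modal projector trained with
#     a cycle consistency loss so text-only embeddings can populate the
#     same latent action space. Dimensions and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentActionCodebook(nn.Module):
    """Quantize an observation embedding to its nearest codebook entry."""

    def __init__(self, num_codes: int = 512, dim: int = 256):
        super().__init__()
        self.codes = nn.Embedding(num_codes, dim)

    def forward(self, z: torch.Tensor):
        # z: (batch, dim) embedding of a *future* observation.
        dist = torch.cdist(z, self.codes.weight)   # (batch, num_codes)
        idx = dist.argmin(dim=-1)                  # discrete latent action ids
        z_q = self.codes(idx)
        # Straight-through estimator so gradients reach the encoder.
        z_q = z + (z_q - z).detach()
        # Standard VQ objective: pull codes toward encoder outputs, and
        # (with a smaller weight) encoder outputs toward their codes.
        vq_loss = F.mse_loss(self.codes(idx), z.detach()) \
            + 0.25 * F.mse_loss(z, self.codes(idx).detach())
        return z_q, idx, vq_loss


class CrossModalProjector(nn.Module):
    """Map text-only embeddings into the paired image-text embedding space."""

    def __init__(self, text_dim: int = 768, joint_dim: int = 256):
        super().__init__()
        self.to_joint = nn.Linear(text_dim, joint_dim)  # text -> image-text
        self.to_text = nn.Linear(joint_dim, text_dim)   # inverse, for the cycle

    def cycle_loss(self, text_emb: torch.Tensor) -> torch.Tensor:
        # Cycle consistency: projecting into the joint space and back should
        # recover the original text embedding. This is one plausible reading
        # of the paper's cycle consistency loss on text-only data.
        return F.mse_loss(self.to_text(self.to_joint(text_emb)), text_emb)


if __name__ == "__main__":
    codebook = LatentActionCodebook()
    projector = CrossModalProjector()
    future_obs = torch.randn(4, 256)   # stand-in future-observation embeddings
    text_only = torch.randn(4, 768)    # stand-in text-only embeddings
    z_q, action_ids, vq_loss = codebook(future_obs)
    loss = vq_loss + projector.cycle_loss(text_only)
    print(action_ids.tolist(), float(loss))
```

In an actual pipeline, the random tensors would be replaced by encoder outputs from the MCA, the two losses would be combined while constructing the codebook, and RL fine-tuning would then act over the discrete latent action ids rather than raw text tokens.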

Yongqi Li, Hao Lang, Tieyun Qian, Yongbin Li • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multimodal Conversation | MMRole (ID) | LLM-as-a-Judge Score: 95.3 | 20 |
| Multimodal Conversation | MMRole (OOD) | LLM-as-a-Judge Score: 91.6 | 20 |
| Multimodal Conversation | PCogAlignBench (LS1) | LLM Judge Score: 0.903 | 20 |
| Multimodal Conversation | PCogAlignBench (LS2) | LLM Judge Score: 0.852 | 20 |
