
RLLaVA: An RL-central Framework for Language and Vision Assistants

About

We present RLLaVA, an RL-central framework for Language and Vision Assistants built on a Markov decision process (MDP) formulation. RLLaVA decouples RL algorithmic logic from model architecture and distributed execution, allowing researchers to implement new RL algorithms with minimal code and to plug in a broad family of RL methods and vision-language models (VLMs) while remaining agnostic to specific training and inference engines. RLLaVA makes resource-efficient training of 1B–7B models feasible on common GPUs; notably, 4B-scale models can be trained end-to-end with full-parameter updates on a single 24GB GPU. Experiments on multi-modal and agentic tasks demonstrate RLLaVA's task extensibility: models trained with it consistently improve over their base models and are competitive with models trained by other specially engineered RL frameworks. The code is available at https://github.com/TinyLoopX/RLLaVA.

Lei Zhao, Zihao Ma, Boyu Lin, Yuhe Liu, Wenjun Wu, Lei Huang • 2025
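As a concrete illustration of the decoupling described above, the sketch below shows how an "RL-central" design can isolate algorithmic logic behind a single interface. This is a hypothetical sketch, not RLLaVA's actual API (the names RolloutBatch, RLAlgorithm, and ClippedPolicyGradient are illustrative; see the GitHub repository for the real interfaces): the algorithm sees only generic per-token tensors, so any VLM backbone or inference engine that produces log-probabilities and advantages can drive it.

```python
# Hypothetical sketch of a decoupled RL-algorithm plug-in point;
# not RLLaVA's actual API. The algorithm operates on generic tensors,
# independent of the VLM backbone and the training/inference engines.
from dataclasses import dataclass
from typing import Protocol

import torch


@dataclass
class RolloutBatch:
    # Per-token quantities gathered by whatever engine ran the rollouts.
    logprobs: torch.Tensor      # log pi_theta(a_t | s_t), shape (B, T)
    old_logprobs: torch.Tensor  # log-probs under the behavior policy
    advantages: torch.Tensor    # estimated advantages, shape (B, T)
    mask: torch.Tensor          # 1 for response tokens, 0 for prompt/padding


class RLAlgorithm(Protocol):
    """Plug-in point: a new algorithm only maps a batch to a scalar loss."""
    def loss(self, batch: RolloutBatch) -> torch.Tensor: ...


class ClippedPolicyGradient:
    """PPO-style clipped surrogate objective as one example plug-in."""

    def __init__(self, clip_eps: float = 0.2):
        self.clip_eps = clip_eps

    def loss(self, batch: RolloutBatch) -> torch.Tensor:
        # Importance ratio between current and behavior policies.
        ratio = torch.exp(batch.logprobs - batch.old_logprobs)
        unclipped = ratio * batch.advantages
        clipped = torch.clamp(ratio, 1 - self.clip_eps, 1 + self.clip_eps) * batch.advantages
        per_token = -torch.minimum(unclipped, clipped)
        # Average only over real response tokens.
        return (per_token * batch.mask).sum() / batch.mask.sum().clamp(min=1)


if __name__ == "__main__":
    B, T = 2, 5
    batch = RolloutBatch(
        logprobs=torch.randn(B, T),
        old_logprobs=torch.randn(B, T),
        advantages=torch.randn(B, T),
        mask=torch.ones(B, T),
    )
    algo: RLAlgorithm = ClippedPolicyGradient()
    print("loss:", algo.loss(batch).item())
```

Under such a contract, adding a new method (say, a group-normalized GRPO-style objective) would amount to implementing one more loss function, with no changes to the model or execution layers.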

Related benchmarks

Task       Dataset       Metric     Result   Rank
Coding     MAT-Coding    F1 Score   30.6     2
Counting   CLEVR-Count   Accuracy   57.5     2
Grounding  RefCOCO+/g    IoU        63.3     2
Math       Geometry3K    Accuracy   39.0     2
Search     MAT-Search    F1 Score   27.1     2
