
Can Pre-Trained Text-to-Image Models Generate Visual Goals for Reinforcement Learning?

About

Pre-trained text-to-image generative models can produce diverse, semantically rich, and realistic images from natural language descriptions. Compared with language, images usually convey information in more detail and with less ambiguity. In this study, we propose Learning from the Void (LfVoid), a method that leverages the power of pre-trained text-to-image models and advanced image editing techniques to guide robot learning. Given natural language instructions, LfVoid can edit the original observations to obtain goal images, such as "wiping" a stain off a table. Subsequently, LfVoid trains an ensembled goal discriminator on the generated images to provide reward signals for a reinforcement learning agent, guiding it to achieve the goal. The ability of LfVoid to learn with zero in-domain training on expert demonstrations or true goal observations (the void) is attributed to the utilization of knowledge from web-scale generative models. We evaluate LfVoid across three simulated tasks and validate its feasibility in the corresponding real-world scenarios. In addition, we offer insights into the key considerations for the effective integration of visual generative models into robot learning workflows. We posit that our work represents an initial step towards the broader application of pre-trained visual generative models in the robotics field. Our project page: https://lfvoid-rl.github.io/.
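The abstract describes training an ensemble of goal discriminators on generated goal images, whose output then serves as a reward signal for the RL agent. The sketch below illustrates that idea under stated assumptions: the linear discriminators, bootstrap resampling for ensemble diversity, and the toy feature clusters are all illustrative choices, not the paper's actual architecture or training recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GoalDiscriminatorEnsemble:
    """Illustrative ensemble of linear logistic discriminators over image
    features, standing in for LfVoid's ensembled goal discriminator."""

    def __init__(self, feat_dim, n_members=5, lr=0.1, steps=200):
        self.W = rng.normal(0.0, 0.01, size=(n_members, feat_dim))
        self.b = np.zeros(n_members)
        self.lr, self.steps = lr, steps

    def fit(self, goal_feats, obs_feats):
        # positives: generated goal images; negatives: current observations
        X = np.vstack([goal_feats, obs_feats])
        y = np.concatenate([np.ones(len(goal_feats)), np.zeros(len(obs_feats))])
        for i in range(len(self.W)):
            # each member trains on a bootstrap resample for diversity
            idx = rng.integers(0, len(X), size=len(X))
            Xi, yi = X[idx], y[idx]
            for _ in range(self.steps):
                p = sigmoid(Xi @ self.W[i] + self.b[i])
                self.W[i] -= self.lr * (Xi.T @ (p - yi)) / len(Xi)
                self.b[i] -= self.lr * np.mean(p - yi)

    def reward(self, feats):
        # mean goal probability across members = dense reward for the agent
        return sigmoid(feats @ self.W.T + self.b).mean(axis=-1)

# toy features: goal-like images cluster at +1, initial observations at -1
goal = rng.normal(+1.0, 0.3, size=(64, 8))
obs = rng.normal(-1.0, 0.3, size=(64, 8))

disc = GoalDiscriminatorEnsemble(feat_dim=8)
disc.fit(goal, obs)
print(disc.reward(goal[:4]))  # goal-like observations: high reward
print(disc.reward(obs[:4]))   # far-from-goal observations: low reward
```

In the full method, the features would come from real camera observations and the positives from the edited goal images, so the discriminator scores how close the current scene looks to the generated goal.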

Jialu Gao, Kaizhe Hu, Guowei Xu, Huazhe Xu • 2023

Related benchmarks

| Task                   | Dataset                       | Metric                    | Result | Rank |
|------------------------|-------------------------------|---------------------------|--------|------|
| Push                   | Simulation                    | Success Rate              | 27.9   | 10   |
| Visual Goal Generation | User Study (Structure-Based)  | Elo Score                 | 91.6   | 5    |
| LED                    | Simulation                    | Success Rate              | 75     | 4    |
| Wipe                   | Simulation                    | Cleaned Stains per Patch  | 21.3   | 4    |
| Visual Goal Generation | User Study (Appearance-Based) | Elo Score                 | 87     | 3    |

Other info

Code
