
From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces

About

Much of the previous work toward digital agents for graphical user interfaces (GUIs) has relied on text-based representations (derived from HTML or other structured data sources), which are not always readily available. These input representations have often been coupled with custom, task-specific action spaces. This paper focuses on creating agents that interact with the digital world using the same conceptual interface that humans commonly use -- pixel-based screenshots and a generic action space corresponding to keyboard and mouse actions. Building upon recent progress in pixel-based pretraining, we show, for the first time, that it is possible for such agents to outperform human crowdworkers on the MiniWob++ benchmark of GUI-based instruction-following tasks.
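The "generic action space corresponding to keyboard and mouse actions" mentioned above can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's actual interface: the class names, fields, and the flat string encoding are hypothetical, chosen only to show what a task-agnostic, pixel-level action space looks like in contrast to DOM- or task-specific commands.

```python
from dataclasses import dataclass

# Hypothetical sketch of a generic GUI action space: every action the agent
# can emit is either a mouse event at pixel coordinates in the screenshot
# or a keyboard event. No task-specific commands, no DOM references.

@dataclass(frozen=True)
class MouseAction:
    kind: str   # e.g. "click", "dblclick", "move", or "scroll"
    x: int      # pixel column in the screenshot
    y: int      # pixel row in the screenshot

@dataclass(frozen=True)
class KeyAction:
    key: str    # a key identifier such as "a", "Enter", or "Backspace"

def parse_action(token: str):
    """Decode a flat action string such as 'click 32 120' or 'key Enter'.

    A text-generating agent could emit actions in this flat form; the
    environment wrapper would decode them into mouse/keyboard events.
    """
    parts = token.split()
    if parts[0] == "key":
        return KeyAction(key=parts[1])
    return MouseAction(kind=parts[0], x=int(parts[1]), y=int(parts[2]))
```

Because the same two event types cover any on-screen widget, an agent over this space transfers across GUIs without per-task action engineering.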

Peter Shaw, Mandar Joshi, James Cohan, Jonathan Berant, Panupong Pasupat, Hexiang Hu, Urvashi Khandelwal, Kenton Lee, Kristina Toutanova • 2023

Related benchmarks

Task                  Dataset                          Result                     Rank
Website Navigation    WebLINX OOD 1.0 (test)           IM: 81.8                   11
Website Navigation    WebLINX IID 1.0 (test)           Overall Score: 23.9        11
Web automation        MiniWob 35 tasks subset (test)   Mean Success Rate: 64.6    4
