# From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces

## About
Much of the previous work towards digital agents for graphical user interfaces (GUIs) has relied on text-based representations (derived from HTML or other structured data sources), which are not always readily available. These input representations have often been coupled with custom, task-specific action spaces. This paper focuses on creating agents that interact with the digital world using the same conceptual interface that humans commonly use -- via pixel-based screenshots and a generic action space corresponding to keyboard and mouse actions. Building upon recent progress in pixel-based pretraining, we show, for the first time, that it is possible for such agents to outperform human crowdworkers on the MiniWob++ benchmark of GUI-based instruction following tasks.
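To make the idea of a generic, task-agnostic action space concrete, here is a minimal Python sketch. The type names (`Click`, `KeyPress`, `TypeText`) and the textual action format parsed by `parse_action` are illustrative assumptions, not the paper's implementation; the point is simply that every GUI task shares one small vocabulary of keyboard and mouse primitives acting on screen coordinates.

```python
# Minimal sketch of a generic keyboard-and-mouse action space.
# All names and the text format below are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Union


@dataclass
class Click:
    """Mouse click at pixel coordinates on the screenshot."""
    x: int
    y: int
    double: bool = False


@dataclass
class KeyPress:
    """Press a key or key combination, e.g. 'Enter' or 'Ctrl+C'."""
    key: str


@dataclass
class TypeText:
    """Type a string at the current focus."""
    text: str


Action = Union[Click, KeyPress, TypeText]


def parse_action(decoded: str) -> Action:
    """Parse a model's decoded action string into a structured Action.

    Assumes a space-separated format such as 'click 32 117',
    'keypress Enter', or 'type hello world'.
    """
    parts = decoded.split()
    if parts[0] == "click":
        return Click(x=int(parts[1]), y=int(parts[2]))
    if parts[0] == "keypress":
        return KeyPress(key=parts[1])
    if parts[0] == "type":
        return TypeText(text=" ".join(parts[1:]))
    raise ValueError(f"unrecognized action: {decoded!r}")
```

Because the same three primitives cover any GUI task, an agent trained this way needs no task-specific action head: it decodes actions as text, which are then parsed and executed against the environment.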
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Website Navigation | WebLINX OOD 1.0 (test) | IM | 81.8 | 11 |
| Website Navigation | WebLINX IID 1.0 (test) | Overall Score | 23.9 | 11 |
| Web automation | MiniWob 35 tasks subset (test) | Mean Success Rate | 64.6 | 4 |