Learning Affordance Landscapes for Interaction Exploration in 3D Environments

About

Embodied agents operating in human spaces must be able to master how their environment works: what objects can the agent use, and how can it use them? We introduce a reinforcement learning approach for exploration for interaction, whereby an embodied agent autonomously discovers the affordance landscape of a new unmapped 3D environment (such as an unfamiliar kitchen). Given an egocentric RGB-D camera and a high-level action space, the agent is rewarded for maximizing successful interactions while simultaneously training an image-based affordance segmentation model. The former yields a policy for acting efficiently in new environments to prepare for downstream interaction tasks, while the latter yields a convolutional neural network that maps image regions to the likelihood they permit each action, densifying the rewards for exploration. We demonstrate our idea with AI2-iTHOR. The results show agents can learn how to use new home environments intelligently and that this exploration prepares them to rapidly address various downstream tasks like "find a knife and put it in the drawer." Project page: http://vision.cs.utexas.edu/projects/interaction-exploration/
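The abstract describes two coupled signals: a sparse reward for each successful interaction, and a dense shaping term from the learned affordance segmentation model. A minimal sketch of that per-step reward, with all names, the novelty condition, and the exact shaping rule being illustrative assumptions rather than the paper's implementation:

```python
def exploration_reward(interaction_succeeded, novel, affordance_score,
                       success_bonus=1.0, shaping_weight=0.1):
    """Per-step reward for the exploration policy (hedged sketch).

    interaction_succeeded: did the attempted action succeed in the simulator?
    novel: is this (object, action) interaction new for the episode?
        (Assumed novelty gating, so repeating one interaction is not rewarded.)
    affordance_score: the segmentation model's probability that the targeted
        image region permits the attempted action; used to densify the
        otherwise sparse success reward.
    """
    if interaction_succeeded and novel:
        # Sparse reward for discovering a new successful interaction.
        return success_bonus
    # Dense shaping term from the learned affordance model.
    return shaping_weight * affordance_score
```

In this sketch the affordance network is trained from the agent's own interaction outcomes, so the shaping term improves as exploration proceeds, which is one plausible reading of "densifying the rewards for exploration."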

Tushar Nagarajan, Kristen Grauman · 2020

Related benchmarks

Task    Dataset                        Metric              Result   Rank
HEAT    AI2-iTHOR (test)               Task Success Rate   6        5
STORE   AI2-iTHOR (test)               Task Success Rate   0.03     5
COOL    AI2-iTHOR (test)               Task Success Rate   11       5
CLEAN   AI2-iTHOR (test)               Task Success Rate   19       5
PREP    AI2-iTHOR (test)               Task Success Rate   19       5
SLICE   AI2-iTHOR (test)               Task Success Rate   26       5
TRASH   AI2-iTHOR environments (test)  Task Success Rate   2        5
