
Grounding Commands for Autonomous Vehicles via Layer Fusion with Region-specific Dynamic Layer Attention

About

Grounding a command to the visual environment is an essential ingredient for interactions between autonomous vehicles and humans. In this work, we study the problem of language grounding for autonomous vehicles, which aims to localize a region in a visual scene according to a natural language command from a passenger. Prior work only employs the top layer representations of a vision-and-language pre-trained model to predict the region referred to by the command. However, such a method omits the useful features encoded in other layers, and thus results in inadequate understanding of the input scene and command. To tackle this limitation, we present the first layer fusion approach for this task. Since different visual regions may require distinct types of features to disambiguate them from each other, we further propose the region-specific dynamic (RSD) layer attention to adaptively fuse the multimodal information across layers for each region. Extensive experiments on the Talk2Car benchmark demonstrate that our approach helps predict more accurate regions and outperforms state-of-the-art methods.
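The abstract describes fusing the multimodal features that each encoder layer produces for a region, with attention weights computed separately per region. The paper's exact formulation is not given here, so the following is only a minimal sketch of that idea: hypothetically using the region's top-layer feature as the query and a learnable bilinear matrix `w` to score each layer, then taking the attention-weighted sum as the fused representation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def rsd_layer_fusion(layer_feats, region_query, w):
    """Sketch of region-specific dynamic layer attention (assumed form).

    layer_feats:  (num_layers, d) -- the region's multimodal representation
                  at each layer of the pre-trained encoder.
    region_query: (d,) -- a region-specific query vector (hypothetical
                  choice: the region's top-layer representation).
    w:            (d, d) -- a learnable bilinear scoring matrix (assumed).
    Returns the fused (d,) feature: an attention-weighted sum over layers,
    so different regions can emphasize different layers.
    """
    scores = layer_feats @ w @ region_query   # one score per layer
    attn = softmax(scores)                    # region-specific layer weights
    return attn @ layer_feats                 # fused feature, shape (d,)

# Toy usage: 12 encoder layers, 8-dimensional features for one region.
rng = np.random.default_rng(0)
H = rng.standard_normal((12, 8))
fused = rsd_layer_fusion(H, H[-1], np.eye(8))
print(fused.shape)
```

Because the attention weights depend on the region's own query, two regions in the same scene can draw on different layers, which is the motivation the abstract gives for region-specific rather than global layer fusion.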

Hou Pong Chan, Mingxi Guo, Cheng-Zhong Xu • 2022

Related benchmarks

Task              Dataset                     Metric  Result   Rank
Visual Grounding  Corner-case Multi-agent     IoU     71.87    15
Visual Grounding  Talk2Car                    IoU     72.64    15
Visual Grounding  MoCAD (test)                IoU     0.7235   15
Visual Grounding  MoCAD (val)                 IoU     71.46    15
Visual Grounding  DrivePilot (test)           IoU     73.37    15
Visual Grounding  DrivePilot (val)            IoU     74.52    15
Visual Grounding  Corner-case Visual Constr.  IoU     70.22    15
Visual Grounding  Long-text (val)             IoU     65.8     15
Visual Grounding  Corner-case Ambiguous       IoU     63.44    15
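The results above are reported as IoU, the standard intersection-over-union between a predicted bounding box and the ground-truth box. A minimal reference implementation for axis-aligned boxes in `(x1, y1, x2, y2)` form (the function name and box convention are illustrative, not taken from the paper):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes, each given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the intersection.
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.1429
```

Identical boxes score 1.0 and disjoint boxes score 0.0, so the benchmark values above (e.g. 72.64 on Talk2Car) are IoU expressed as percentages.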
