
BoxeR: Box-Attention for 2D and 3D Transformers

About

In this paper, we propose a simple attention mechanism, which we call box-attention. It enables spatial interaction between grid features, as sampled from boxes of interest, and improves the learning capability of transformers for several vision tasks. Specifically, we present BoxeR, short for Box Transformer, which attends to a set of boxes by predicting their transformation from a reference window on an input feature map. BoxeR computes attention weights on these boxes by considering their grid structure. Notably, BoxeR-2D naturally reasons about box information within its attention module, making it suitable for end-to-end instance detection and segmentation tasks. By learning invariance to rotation in the box-attention module, BoxeR-3D is capable of generating discriminative information from a bird's-eye-view plane for end-to-end 3D object detection. Our experiments demonstrate that the proposed BoxeR-2D achieves state-of-the-art results on COCO detection and instance segmentation. In addition, BoxeR-3D improves over the end-to-end 3D object detection baseline and already obtains compelling performance for the vehicle category of the Waymo Open dataset, without any class-specific optimization. Code is available at https://github.com/kienduynguyen/BoxeR.
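To make the mechanism concrete, the sketch below illustrates the core idea of box-attention for a single query in plain NumPy: the query predicts a transformation of a reference window, grid features are sampled from the transformed box, and attention weights over the grid cells come from the query alone. This is a simplified illustration, not the paper's implementation: the function and weight names (`box_attention`, `W_box`, `W_attn`) are hypothetical, sampling here is nearest-neighbour rather than bilinear, and multi-head and multi-scale details are omitted.

```python
import numpy as np

def box_attention(q, feat, ref_box, W_box, W_attn, grid=2):
    """Single-query, single-head box-attention sketch (simplified).

    q:       (d,) query vector
    feat:    (H, W, d) input feature map
    ref_box: (cx, cy, w, h) reference window in pixel coordinates
    W_box:   (d, 4) projects the query to a box transformation
    W_attn:  (d, grid*grid) projects the query to grid attention logits
    """
    cx, cy, w, h = ref_box
    dx, dy, dw, dh = q @ W_box                 # predicted transformation
    cx, cy = cx + dx * w, cy + dy * h          # shift the window
    w, h = w * np.exp(dw), h * np.exp(dh)      # rescale the window

    H, W_img, _ = feat.shape
    # Sample a grid x grid set of features inside the transformed box
    # (nearest-neighbour here; the paper samples grid features within the box).
    xs = np.linspace(cx - w / 2, cx + w / 2, grid)
    ys = np.linspace(cy - h / 2, cy + h / 2, grid)
    samples = np.stack([
        feat[int(np.clip(y, 0, H - 1)), int(np.clip(x, 0, W_img - 1))]
        for y in ys for x in xs
    ])                                         # (grid*grid, d)

    # Attention weights over the grid cells, predicted from the query
    logits = q @ W_attn
    wts = np.exp(logits - logits.max())
    wts /= wts.sum()                           # softmax over grid positions
    return wts @ samples                       # (d,) attended feature
```

For example, with a 16x16 feature map of 8-dimensional features and a reference window centred in the map, the call returns one 8-dimensional attended feature per query; in BoxeR-2D this per-box grid structure is what lets the attention module reason about boxes end to end.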

Duy-Kien Nguyen, Jihong Ju, Olaf Booij, Martin R. Oswald, Cees G. M. Snoek• 2021

Related benchmarks

Task | Dataset | Result | Rank
Object Detection | COCO 2017 (test-dev) | mAP 51.1 | 499
Instance Segmentation | COCO 2017 (test-dev) | AP (overall) 43.8 | 253
3D Object Detection | Waymo Open Dataset LEVEL_2 (val) | -- | 46
3D Object Detection | Waymo Open Dataset 1.2 (val) | Vehicle mAP H L2 63.7 | 32
3D Object Detection | Waymo Open Dataset (WOD) vehicle class (val) | L2 mAP 63.9 | 12

Other info

Code: https://github.com/kienduynguyen/BoxeR