
Object Recognition as Next Token Prediction

About

We present an approach that poses object recognition as next token prediction. The idea is to apply a language decoder that auto-regressively predicts text tokens from image embeddings to form labels. To ground this prediction in auto-regression, we customize a non-causal attention mask for the decoder with two key features: tokens from different labels are modeled as independent, and image tokens are treated as a prefix. This masking mechanism enables an efficient inference method, one-shot sampling, which samples the tokens of multiple labels in parallel and ranks the generated labels by their probabilities. To further improve efficiency, we propose a simple strategy for constructing a compact decoder: discard the intermediate blocks of a pretrained language model. The resulting decoder matches the full model's performance while being notably more efficient. The code is available at https://github.com/kaiyuyue/nxtp
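The non-causal attention mask described above can be sketched as follows. This is a minimal illustration (not the authors' implementation) assuming a sequence laid out as an image-token prefix followed by each label's tokens: image tokens attend to one another bidirectionally, every label token attends to the full image prefix, attention within a label is causal, and there is no attention across labels.

```python
import numpy as np

def build_mask(num_image_tokens, label_lengths):
    # Total sequence: image prefix followed by each label's tokens.
    n = num_image_tokens + sum(label_lengths)
    mask = np.zeros((n, n), dtype=bool)  # mask[i, j]: token i may attend to token j
    # Image tokens form a prefix with full bidirectional attention among themselves.
    mask[:num_image_tokens, :num_image_tokens] = True
    start = num_image_tokens
    for length in label_lengths:
        # Every label token attends to the whole image prefix.
        mask[start:start + length, :num_image_tokens] = True
        # Causal attention within the label; no attention across labels.
        mask[start:start + length, start:start + length] = np.tril(
            np.ones((length, length), dtype=bool)
        )
        start += length
    return mask
```

Because the label blocks are mutually independent under this mask, tokens for all candidate labels can be decoded in parallel, which is what makes one-shot sampling possible.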

Kaiyu Yue, Bor-Chun Chen, Jonas Geiping, Hengduo Li, Tom Goldstein, Ser-Nam Lim • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multi-label image recognition | MS-COCO 2014 (val) | mAP | 57.38 | 51 |
| Object recognition | COCO (val) | Recall | 76.5 | 31 |
| Object recognition | CC3M (test) | Recall | 0.738 | 21 |
| Object recognition | OpenImages v7 (val) | Recall | 66.3 | 21 |
| Object detection | Objects365 | AP | 23.81 | 15 |
| Image tagging | Objects365 | OP | 34.71 | 11 |
| Object recognition (cross-validation) | COCO (val) | Recall | 0.823 | 10 |
| Object recognition | CC3M | Recall | 86.8 | 3 |
| Object recognition | COCO | Recall | 93 | 3 |
| Object recognition | OpenImages | Recall | 0.874 | 3 |

Other info

Code: https://github.com/kaiyuyue/nxtp
