
Perception Encoder: The best visual embeddings are not at the output of the network

About

We introduce Perception Encoder (PE), a state-of-the-art vision encoder for image and video understanding trained via simple vision-language learning. Traditionally, vision encoders have relied on a variety of pretraining objectives, each tailored to specific downstream tasks such as classification, captioning, or localization. Surprisingly, after scaling our carefully tuned image pretraining recipe and refining with our robust video data engine, we find that contrastive vision-language training alone can produce strong, general embeddings for all of these downstream tasks. There is only one caveat: these embeddings are hidden within the intermediate layers of the network. To draw them out, we introduce two alignment methods: language alignment for multimodal language modeling, and spatial alignment for dense prediction. Together, our PE family of models achieves best-in-class results on a wide variety of tasks, including (1) zero-shot image and video classification and retrieval, simultaneously obtaining 86.6 average zero-shot ImageNet robustness and 76.9 zero-shot Kinetics-400 video classification; (2) document, image, and video Q&A, enabling 94.6 DocVQA, 80.9 InfographicVQA, and 82.7 PerceptionTest with an 8B LLM; and (3) spatial tasks such as detection, tracking, and depth estimation, setting a new COCO state-of-the-art of 66.0 box mAP. To foster further research, we release our models, code, and novel dataset of synthetically and human-annotated videos: https://github.com/facebookresearch/perception_models
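The abstract's central caveat is that the strongest general-purpose embeddings sit in intermediate layers rather than at the network's output, so they must be captured during the forward pass instead of read off the final layer. Below is a minimal, illustrative sketch of that capture pattern; `TinyEncoder`, its per-layer scaling, and the inputs are hypothetical stand-ins, not the released PE code or architecture.

```python
# Illustrative sketch (not the released PE implementation): record each
# layer's activation during a forward pass so that intermediate embeddings
# remain accessible, mirroring the idea that the best embeddings are not
# at the output of the network.

class TinyEncoder:
    """A toy stack of 'layers'; each layer just scales its input vector."""

    def __init__(self, scales):
        self.scales = scales  # one multiplier per layer (hypothetical weights)

    def forward(self, x, capture=None):
        """Run all layers; optionally snapshot every layer's output into `capture`."""
        for i, scale in enumerate(self.scales):
            x = [v * scale for v in x]
            if capture is not None:
                capture[i] = list(x)  # intermediate embedding for layer i
        return x  # final-layer output

encoder = TinyEncoder(scales=[2.0, 0.5, 3.0])
intermediates = {}
final = encoder.forward([1.0, -1.0], capture=intermediates)

# The "hidden" layer-1 embedding is available even though forward()
# would normally expose only the last layer's output.
print(intermediates[1])  # → [1.0, -1.0]
print(final)             # → [3.0, -3.0]
```

In a real framework such as PyTorch, the same effect is usually achieved with forward hooks registered on the layers of interest, which avoids modifying the model's forward method.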

Daniel Bolya, Po-Yao Huang, Peize Sun, Jang Hyun Cho, Andrea Madotto, Chen Wei, Tengyu Ma, Jiale Zhi, Jathushan Rajasegaran, Hanoona Rasheed, Junke Wang, Marco Monteiro, Hu Xu, Shiyu Dong, Nikhila Ravi, Daniel Li, Piotr Dollár, Christoph Feichtenhofer • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Semantic segmentation | ADE20K (val) | mIoU | 38.9 | 2731 |
| Text-to-Video Retrieval | DiDeMo | R@1 | 0.458 | 360 |
| Semantic segmentation | PASCAL VOC (val) | mIoU | 69.2 | 338 |
| Text-to-Video Retrieval | MSR-VTT | Recall@1 | 51.6 | 313 |
| Text-to-Video Retrieval | MSVD | R@1 | 59.7 | 218 |
| Video-to-Text Retrieval | MSR-VTT | Recall@1 | 49.9 | 157 |
| Video Action Classification | Something-Something v2 | Top-1 Acc | 55.4 | 139 |
| Classification | ImageNet 1k (test val) | Top-1 Accuracy | 89.22 | 138 |
| Video Classification | Kinetics-400 | Top-1 Acc | 76.4 | 131 |
| Image-to-Text Retrieval | MSCOCO | R@1 | 75.9 | 124 |

Showing 10 of 55 rows.
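Several rows above report R@1 (Recall at 1): the fraction of queries whose top-ranked candidate is the correct match. A minimal sketch of that metric, assuming a precomputed similarity matrix in which query i's ground-truth match is candidate i (the matrix values here are made-up data, not PE outputs):

```python
# Recall@1 for retrieval: sims[i][j] scores query i against candidate j,
# and the ground truth pairs query i with candidate i. R@1 is the fraction
# of queries whose highest-scoring candidate is the correct one.

def recall_at_1(sims):
    hits = 0
    for i, row in enumerate(sims):
        best = max(range(len(row)), key=lambda j: row[j])
        if best == i:
            hits += 1
    return hits / len(sims)

sims = [
    [0.9, 0.2, 0.1],  # query 0 -> candidate 0 (correct)
    [0.3, 0.1, 0.8],  # query 1 -> candidate 2 (wrong)
    [0.2, 0.4, 0.7],  # query 2 -> candidate 2 (correct)
]
print(recall_at_1(sims))  # → 0.666... (2 of 3 queries ranked correctly)
```

R@5 and R@10 generalize this by counting a hit whenever the correct candidate appears anywhere in the top 5 or top 10 ranked results.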
