
MixFormer: End-to-End Tracking with Iterative Mixed Attention

About

Visual object tracking often employs a multi-stage pipeline of feature extraction, target information integration, and bounding box estimation. To simplify this pipeline and unify the processes of feature extraction and target information integration, in this paper, we present a compact tracking framework, termed MixFormer, built upon transformers. Our core design is to utilize the flexibility of attention operations and propose a Mixed Attention Module (MAM) for simultaneous feature extraction and target information integration. This synchronous modeling scheme allows us to extract target-specific discriminative features and perform extensive communication between the target and search area. Based on MAM, we build our MixFormer trackers simply by stacking multiple MAMs and placing a localization head on top. Specifically, we instantiate two types of MixFormer trackers: a hierarchical tracker, MixCvT, and a non-hierarchical tracker, MixViT. For these two trackers, we investigate a series of pre-training methods and uncover the different behaviors of supervised pre-training and self-supervised pre-training in our MixFormer trackers. We also extend masked pre-training to our MixFormer trackers and design the competitive TrackMAE pre-training technique. Finally, to handle multiple target templates during online tracking, we devise an asymmetric attention scheme in MAM to reduce computational cost, and propose an effective score prediction module to select high-quality templates. Our MixFormer trackers set new state-of-the-art performance on seven tracking benchmarks, including LaSOT, TrackingNet, VOT2020, GOT-10k, OTB100, and UAV123. In particular, our MixViT-L achieves an AUC score of 73.3% on LaSOT, 86.1% on TrackingNet, an EAO of 0.584 on VOT2020, and an AO of 75.7% on GOT-10k. Code and trained models are publicly available at https://github.com/MCG-NJU/MixFormer.
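The key ideas in the abstract — mixed attention over concatenated target and search tokens, plus the asymmetric scheme that prunes target-to-search attention so template features can be cached across frames — can be illustrated with a toy single-head sketch. This is not the paper's implementation: it uses NumPy, identity projections in place of learned W_q/W_k/W_v matrices, and the function and variable names (`mixed_attention`, `target`, `search`) are illustrative only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixed_attention(target, search, d_k, asymmetric=True):
    """One toy mixed-attention step over target and search tokens.

    target: (T, d) template tokens; search: (S, d) search-region tokens.
    With asymmetric=True, target queries attend only to target keys
    (the target-to-search branch is pruned), so the template side is
    independent of the current search region and could be cached.
    Search queries always attend to the full mixed token set.
    """
    tokens = np.concatenate([target, search], axis=0)  # (T+S, d)
    # Identity projections stand in for learned q/k/v projections.
    q, k, v = tokens, tokens, tokens
    T = target.shape[0]

    if asymmetric:
        # Target branch: self-attention over template tokens only.
        attn_t = softmax(q[:T] @ k[:T].T / np.sqrt(d_k))
        out_t = attn_t @ v[:T]
        # Search branch: attends to both template and search tokens.
        attn_s = softmax(q[T:] @ k.T / np.sqrt(d_k))
        out_s = attn_s @ v
    else:
        # Full mixed attention: every token attends to every token.
        attn = softmax(q @ k.T / np.sqrt(d_k))
        out = attn @ v
        out_t, out_s = out[:T], out[T:]
    return out_t, out_s

rng = np.random.default_rng(0)
t = rng.standard_normal((4, 8))   # 4 template tokens, dim 8
s = rng.standard_normal((16, 8))  # 16 search-region tokens, dim 8
out_t, out_s = mixed_attention(t, s, d_k=8)
print(out_t.shape, out_s.shape)   # (4, 8) (16, 8)
```

Note how, in the asymmetric branch, `out_t` depends only on the template tokens: this is what makes it possible to compute the template representation once and reuse it for every frame during online tracking.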

Yutao Cui, Cheng Jiang, Gangshan Wu, Limin Wang• 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Object Tracking | TrackingNet (test) | Normalized Precision (Pnorm) | 90.3 | 460 |
| Visual Object Tracking | LaSOT (test) | AUC | 73.3 | 444 |
| Visual Object Tracking | GOT-10k (test) | Average Overlap | 78 | 378 |
| Object Tracking | LaSOT | AUC | 73.3 | 333 |
| Object Tracking | TrackingNet | Precision (P) | 82 | 225 |
| Visual Object Tracking | GOT-10k | AO | 75.7 | 223 |
| RGB-D Object Tracking | VOT-RGBD 2022 (public challenge) | EAO | 0.779 | 167 |
| Visual Object Tracking | VOT 2020 (test) | EAO | 0.584 | 147 |
| Visual Object Tracking | TNL2K | -- | -- | 95 |
| Object Tracking | COESOT (test) | SR | 55.7 | 50 |

Showing 10 of 30 rows.

Other info

Code: https://github.com/MCG-NJU/MixFormer
