
Adaptive Perception for Unified Visual Multi-modal Object Tracking

About

Recently, many multi-modal trackers prioritize RGB as the dominant modality, treating other modalities as auxiliary, and fine-tune separately for various multi-modal tasks. This imbalance in modality dependence limits a method's ability to dynamically exploit complementary information from each modality in complex scenarios, making it difficult to fully realize the advantages of multi-modal data. As a result, a single unified-parameter model often underperforms across multi-modal tracking tasks. To address this issue, we propose APTrack, a novel unified tracker designed for multi-modal adaptive perception. Unlike previous methods, APTrack explores a unified representation through an equal modeling strategy, which allows the model to dynamically adapt to various modalities and tasks without requiring additional fine-tuning between tasks. Moreover, our tracker integrates an adaptive modality interaction (AMI) module that efficiently bridges cross-modality interactions by generating learnable tokens. Experiments conducted on five diverse multi-modal datasets (RGBT234, LasHeR, VisEvent, DepthTrack, and VOT-RGBD2022) demonstrate that APTrack not only surpasses existing state-of-the-art unified multi-modal trackers but also outperforms trackers designed for specific multi-modal tasks.
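The abstract does not give the internals of the AMI module, but the idea of bridging two modalities through a small set of learnable tokens can be sketched with plain cross-attention. The following is a minimal, hypothetical illustration (names like `ami_bridge` and the two-step gather/redistribute design are assumptions, not APTrack's actual implementation): the tokens first attend over the concatenated RGB and auxiliary features to summarize both modalities, then each modality reads the summary back.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    # Scaled dot-product attention: (Nq, d) x (Nk, d) -> (Nq, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def ami_bridge(rgb, aux, tokens):
    """Hypothetical AMI-style bridge (illustrative only):
    learnable tokens gather information from both modalities,
    then each modality's features read it back residually."""
    joint = np.concatenate([rgb, aux], axis=0)   # (N_rgb + N_aux, d)
    tokens = attend(tokens, joint, joint)        # tokens summarize both modalities
    rgb_out = rgb + attend(rgb, tokens, tokens)  # cross-modal info flows back
    aux_out = aux + attend(aux, tokens, tokens)
    return rgb_out, aux_out

d = 16
rgb = np.random.randn(8, d)   # RGB patch features
aux = np.random.randn(8, d)   # auxiliary-modality features (e.g., thermal)
tok = np.random.randn(4, d)   # small set of learnable tokens
r, a = ami_bridge(rgb, aux, tok)
print(r.shape, a.shape)       # (8, 16) (8, 16)
```

Because the token set is small, this interaction costs O((N_rgb + N_aux) · N_tok) rather than full pairwise cross-attention between the two modalities, which is one plausible reason such a bridge is described as efficient.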

Xiantao Hu, Bineng Zhong, Qihua Liang, Zhiyi Mo, Liangtao Shi, Ying Tai, Jian Yang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| RGB-T Tracking | LasHeR (test) | PR | 74.1 | 244 |
| RGB-D Object Tracking | VOT-RGBD 2022 (public challenge) | EAO | 0.774 | 167 |
| RGB-D Object Tracking | DepthTrack (test) | Precision | 62.3 | 145 |
| Multi-modal Tracking | VisEvent RGB-E (test) | Success | 61.8 | 12 |
