
MambaPro: Multi-Modal Object Re-Identification with Mamba Aggregation and Synergistic Prompt

About

Multi-modal object Re-IDentification (ReID) aims to retrieve specific objects by utilizing complementary image information from different modalities. Recently, large-scale pre-trained models like CLIP have demonstrated impressive performance in traditional single-modal object ReID tasks. However, they remain unexplored for multi-modal object ReID. Furthermore, current multi-modal aggregation methods have obvious limitations in dealing with long sequences from different modalities. To address the above issues, we introduce a novel framework called MambaPro for multi-modal object ReID. Specifically, we first employ a Parallel Feed-Forward Adapter (PFA) to adapt CLIP to multi-modal object ReID. Then, we propose the Synergistic Residual Prompt (SRP) to guide the joint learning of multi-modal features. Finally, leveraging Mamba's superior scalability for long sequences, we introduce Mamba Aggregation (MA) to efficiently model interactions between different modalities. As a result, MambaPro can extract more robust features with lower complexity. Extensive experiments on three multi-modal object ReID benchmarks (i.e., RGBNT201, RGBNT100 and MSVR310) validate the effectiveness of our proposed methods. The source code is available at https://github.com/924973292/MambaPro.
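The abstract does not give implementation details, but the Parallel Feed-Forward Adapter idea (a small trainable branch added alongside a frozen pre-trained block) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the class names, the bottleneck width, and the scaling factor are hypothetical and not taken from the MambaPro source code.

```python
import torch
import torch.nn as nn

class ParallelFeedForwardAdapter(nn.Module):
    """Hypothetical bottleneck adapter: down-project, nonlinearity,
    up-project, then scale. Its output is added in parallel to the
    frozen block's output, so only these few parameters are trained."""
    def __init__(self, dim: int, bottleneck: int = 64, scale: float = 0.1):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * self.up(self.act(self.down(x)))

class AdaptedBlock(nn.Module):
    """Wraps a frozen pre-trained block (e.g. a CLIP transformer layer)
    with a trainable parallel adapter branch."""
    def __init__(self, frozen_block: nn.Module, dim: int):
        super().__init__()
        self.block = frozen_block
        for p in self.block.parameters():
            p.requires_grad = False  # keep the pre-trained weights fixed
        self.adapter = ParallelFeedForwardAdapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Parallel residual: frozen path plus lightweight adapter path.
        return self.block(x) + self.adapter(x)
```

A block wrapped this way keeps the pre-trained representation intact while the adapter learns a task-specific correction, which is the usual motivation for adapter-style tuning of CLIP backbones.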

Yuhao Wang, Xuehu Liu, Tianyu Yan, Yang Liu, Aihua Zheng, Pingping Zhang, Huchuan Lu (2024)

Related benchmarks

Task                                    Dataset            mAP (%)   Rank
Vehicle Re-identification               MSVR310            47.0      29
Vehicle Re-identification               RGBNT100           83.9      19
Multi-modal Vehicle Re-Identification   RGBNT100 (test)    83.9      18
Vehicle Re-identification               WMVeID863          69.5      17
Re-identification                       RGBNT201           78.9      14
Multi-modal Vehicle Re-Identification   MSVR310 (test)     47.0      12
