
Deep Recurrent Convolutional Networks for Video-based Person Re-identification: An End-to-End Approach

About

In this paper, we present an end-to-end approach that simultaneously learns spatio-temporal features and a corresponding similarity metric for video-based person re-identification. Given the video sequence of a person, features extracted from each frame at all levels of a deep convolutional network preserve a higher spatial resolution, from which we can model finer motion patterns. These low-level visual percepts are fed into a variant of recurrent model to characterize the temporal variation between time-steps. Features from all time-steps are then summarized by temporal pooling to produce an overall representation for the complete sequence. The deep convolutional network, the recurrent layer, and the temporal pooling are jointly trained to extract comparable hidden-unit representations from an input pair of time series and to compute their similarity value. The proposed framework thus combines time-series modeling and metric learning to jointly learn relevant features and a good similarity measure between time sequences of persons. Experiments demonstrate that our approach achieves state-of-the-art performance for video-based person re-identification on iLIDS-VID and PRID 2011, the two primary public datasets for this task.
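The pipeline described above (per-frame CNN features, a recurrent layer over time-steps, temporal pooling, and a similarity measure between two sequence representations) can be sketched in minimal form. This is an illustrative, pure-Python sketch, not the authors' implementation: it assumes per-frame features have already been extracted by a CNN, uses a simple tanh recurrence with random weights, and all names and dimensions are hypothetical.

```python
# Illustrative sketch: recurrent aggregation over per-frame features,
# mean temporal pooling, then cosine similarity between two sequences.
# Weights are random placeholders; a real model would learn them jointly.
import math
import random

def matvec(W, x):
    # Plain matrix-vector product over Python lists.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def rnn_step(x_t, h_prev, W_in, W_rec):
    # h_t = tanh(W_in x_t + W_rec h_{t-1})
    pre = [a + b for a, b in zip(matvec(W_in, x_t), matvec(W_rec, h_prev))]
    return [math.tanh(v) for v in pre]

def sequence_feature(frames, W_in, W_rec, hidden_dim):
    # Run the recurrent layer over all time-steps, then mean-pool the
    # hidden states into one representation for the whole sequence.
    h = [0.0] * hidden_dim
    states = []
    for x_t in frames:
        h = rnn_step(x_t, h, W_in, W_rec)
        states.append(h)
    T = len(states)
    return [sum(s[i] for s in states) / T for i in range(hidden_dim)]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

random.seed(0)
feat_dim, hidden_dim, T = 8, 4, 5  # illustrative sizes
W_in = [[random.uniform(-0.1, 0.1) for _ in range(feat_dim)]
        for _ in range(hidden_dim)]
W_rec = [[random.uniform(-0.1, 0.1) for _ in range(hidden_dim)]
         for _ in range(hidden_dim)]
seq_a = [[random.gauss(0, 1) for _ in range(feat_dim)] for _ in range(T)]
seq_b = [[random.gauss(0, 1) for _ in range(feat_dim)] for _ in range(T)]
fa = sequence_feature(seq_a, W_in, W_rec, hidden_dim)
fb = sequence_feature(seq_b, W_in, W_rec, hidden_dim)
print(round(cosine_similarity(fa, fa), 3))  # a sequence matches itself: 1.0
```

In the actual framework these pieces are trained end-to-end as a siamese network, so the CNN, recurrence, and pooling all adapt to make same-person sequence pairs score higher than different-person pairs.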

Lin Wu, Chunhua Shen, Anton van den Hengel • 2016

Related benchmarks

Task                       Dataset             Result       Rank
Person Re-Identification   iLIDS-VID           CMC-1: 58    80
Person Re-Identification   PRID 2011 (test)    Rank-1: 70   48
