Towards Backward-Compatible Representation Learning

About

We propose a way to learn visual features that are compatible with previously computed ones even when they have different dimensions and are learned via different neural network architectures and loss functions. Compatible means that, if such features are used to compare images, then "new" features can be compared directly to "old" features, so they can be used interchangeably. This enables visual search systems to bypass computing new features for all previously seen images when updating the embedding models, a process known as backfilling. Backward compatibility is critical to quickly deploy new embedding models that leverage ever-growing large-scale training datasets and improvements in deep learning architectures and training methods. We propose a framework to train embedding models, called backward-compatible training (BCT), as a first step towards backward-compatible representation learning. In experiments on learning embeddings for face recognition, models trained with BCT successfully achieve backward compatibility without sacrificing accuracy, thus enabling backfill-free model updates of visual embeddings.
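The abstract describes two mechanisms: comparing "new" query embeddings directly against precomputed "old" gallery embeddings (backfill-free search), and training the new model so that its features remain compatible with the old ones. A minimal NumPy sketch of these ideas follows; the function names are hypothetical, and using the old model's classifier weights as an extra supervision head on new embeddings is an assumption about how BCT's compatibility constraint could be realized, not the paper's exact implementation.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def bct_influence_loss(new_emb, old_classifier_w, labels):
    """Cross-entropy of the OLD model's classifier applied to NEW embeddings.

    Adding this term while training the new embedding model pulls its
    features toward the regions the old classifier already separates,
    which is what makes direct old-vs-new comparison meaningful.
    """
    logits = new_emb @ old_classifier_w.T
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

# Backfill-free search: gallery features were computed by the OLD model
# and are never recomputed; queries are embedded with the NEW model.
old_gallery = np.array([[3.0, 4.0], [4.0, 3.0]])  # precomputed, kept as-is
new_query = np.array([[3.0, 4.0]])                # from the updated model
scores = cosine_sim(new_query, old_gallery)       # direct old/new comparison
```

Without a compatibility constraint, the `scores` above would be meaningless, since two independently trained embedding spaces are in general not aligned; the influence-loss term is what justifies ranking old gallery features by similarity to new query features.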

Yantao Shen, Yuanjun Xiong, Wei Xia, Stefano Soatto • 2020

Related benchmarks

| Task                      | Dataset                                  | Metric               | Result | Rank |
|---------------------------|------------------------------------------|----------------------|--------|------|
| Person Re-Identification  | Market-1501                              | mAP                  | 73.61  | 1071 |
| Person Re-Identification  | MSMT17 (test)                            | Rank-1 Acc           | 58.76  | 499  |
| Vehicle Re-identification | VeRi-776 (test)                          | Rank-1               | 90.42  | 232  |
| Image Retrieval           | ROxford                                  | mAP (self)           | 0.6893 | 67   |
| Image Retrieval           | GLD v2 (test)                            | mAP (self)           | 0.1911 | 67   |
| Image Retrieval           | RParis                                   | Pcomp                | 55.86  | 44   |
| Image Retrieval           | CIFAR100 General Setup (test)            | AR@1                 | 58.31  | 19   |
| Person Re-Identification  | Market-1501 Data Extension (10% → 100%)  | Recall@1 (self-test) | 90.88  | 13   |
| Image Retrieval           | In-shop Data Extension (30% → 100%)      | mAP (self-test)      | 65.3   | 12   |
| Person Re-Identification  | Market-1501 Data Extension (50% → 100%)  | Recall@1 (self-test) | 91.24  | 12   |

Showing 10 of 21 rows
