Optimizing Rank-based Metrics with Blackbox Differentiation
About
Rank-based metrics are some of the most widely used criteria for performance evaluation of computer vision models. Despite years of effort, direct optimization for these metrics remains a challenge due to their non-differentiable and non-decomposable nature. We present an efficient, theoretically sound, and general method for differentiating rank-based metrics with mini-batch gradient descent. In addition, we address the optimization instability and the sparsity of the supervision signal that both arise from using rank-based metrics as optimization targets. The resulting losses based on recall and Average Precision are applied to image retrieval and object detection tasks. We obtain performance that is competitive with the state of the art on standard image retrieval datasets and consistently improve the performance of near-state-of-the-art object detectors. The code is available at https://github.com/martius-lab/blackbox-backprop
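The key obstacle is that recall and Average Precision are computed from ranks, i.e. from a piecewise-constant sorting step with zero gradient almost everywhere. Blackbox differentiation sidesteps this by re-running the ranking on an input perturbed with the incoming gradient and returning the resulting finite-difference direction. A minimal numpy sketch of that backward pass, under stated assumptions (function names, the sign convention, and the interpolation strength `lam` are illustrative, not taken verbatim from the repo):

```python
import numpy as np

def ranks(scores):
    """Rank of each score when sorted descending (1 = highest score)."""
    order = np.argsort(-scores)
    r = np.empty_like(order)
    r[order] = np.arange(1, len(scores) + 1)
    return r.astype(float)

def blackbox_rank_grad(scores, grad_output, lam=10.0):
    """Blackbox-differentiation backward pass for the ranking step.

    Perturb the input with the incoming gradient, re-run the (non-
    differentiable) ranking, and return the finite-difference direction.
    `lam` trades off informativeness vs. faithfulness of the gradient.
    """
    perturbed = ranks(scores + lam * grad_output)
    return -(perturbed - ranks(scores)) / lam

# Example: the gradient pushes item 0's score up, which would improve
# its rank; the returned direction reflects the induced rank changes.
scores = np.array([0.1, 0.5, 0.3])
grad = blackbox_rank_grad(scores, np.array([1.0, 0.0, 0.0]))
```

In the paper's setup this backward rule is wrapped in an autograd function so that a recall- or AP-based loss on the ranks can be trained with ordinary mini-batch SGD.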
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Object Detection | PASCAL VOC 2007 (test) | -- | -- | 821 |
| Image Retrieval | CUB-200-2011 (test) | Recall@1 | 64 | 251 |
| Image Retrieval | Stanford Online Products (test) | Recall@1 | 78.6 | 220 |
| In-shop clothes retrieval | in-shop clothes retrieval dataset (test) | Recall@1 | 88.1 | 78 |
| Image Retrieval | SOP (test) | Recall@1 | 78.6 | 42 |
| Image Retrieval | Stanford Online Products (SOP) standard (test) | Recall@1 | 78.6 | 27 |
| Image Retrieval | iNaturalist (test) | Recall@1 | 62.9 | 24 |
| Image Retrieval | Cars196 standard (test) | Recall@1 | 84.2 | 23 |
| Image Classification | CIFAR-10 (test) | AUCPR | 0.944 | 11 |
| Deep Metric Learning | iNaturalist | R@1 | 52.3 | 8 |