TransBoost: Improving the Best ImageNet Performance using Deep Transduction
About
This paper deals with deep transductive learning and proposes TransBoost, a procedure for fine-tuning any deep neural model to improve its performance on any (unlabeled) test set provided at training time. TransBoost is inspired by a large margin principle and is efficient and simple to use. Our method significantly improves ImageNet classification performance across a wide range of architectures, such as ResNets, MobileNetV3-L, EfficientNetB0, ViT-S, and ConvNeXt-T, leading to state-of-the-art transductive performance. Additionally, we show that TransBoost is effective on a wide variety of image classification datasets. The implementation of TransBoost is provided at: https://github.com/omerb01/TransBoost .
Omer Belhasin, Guy Bar-Shalom, Ran El-Yaniv • 2022
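The core idea above is transductive fine-tuning: the unlabeled test inputs are available during training, so the model can be adapted toward confident, large-margin predictions on them. The paper defines its own margin-based objective; the snippet below is only a generic sketch of the idea on a toy linear classifier, using a simple entropy-minimization term on the unlabeled test set as a stand-in for the paper's actual loss (all data, model, and hyperparameters here are illustrative assumptions, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled training set: two Gaussian blobs in 2-D.
X_train = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
# Unlabeled test inputs, available at training time (the transductive setting).
X_test = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])

# A linear softmax classifier stands in for a deep network's final layer.
W = np.zeros((2, 2))
b = np.zeros(2)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr, lam = 0.1, 0.1  # learning rate and weight of the unsupervised term
for _ in range(200):
    # Supervised cross-entropy gradient on the labeled training set.
    p = softmax(X_train @ W + b)
    g = p.copy()
    g[np.arange(len(y_train)), y_train] -= 1.0
    gW = X_train.T @ g / len(y_train)
    gb = g.mean(axis=0)

    # Unsupervised term on the unlabeled test set: minimize prediction
    # entropy, pushing test points away from the decision boundary
    # (a crude stand-in for the paper's large-margin objective).
    q = softmax(X_test @ W + b)
    logq = np.log(q + 1e-12)
    H = -(q * logq).sum(axis=1, keepdims=True)
    gz = -q * (logq + H)  # analytic gradient of mean entropy w.r.t. logits
    gW += lam * X_test.T @ gz / len(X_test)
    gb += lam * gz.mean(axis=0)

    W -= lr * gW
    b -= lr * gb
```

After training, predictions for the held-out inputs are simply `softmax(X_test @ W + b).argmax(axis=1)`; the entropy term only sharpens the decision boundary around the given test points, which is what distinguishes the transductive setup from ordinary inductive fine-tuning.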
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 (test) | -- | -- | 3518 |
| Image Classification | CIFAR-10 (test) | -- | -- | 3381 |
| Image Classification | Stanford Cars (test) | -- | -- | 306 |
| Image Classification | FGVC-Aircraft (test) | -- | -- | 231 |
| Image Classification | DTD (test) | Accuracy | 76.49 | 181 |
| Image Classification | SUN397 (test) | Top-1 Accuracy | 95.94 | 136 |
| Image Classification | Flowers-102 (test) | Top-1 Accuracy | 97.85 | 124 |
| Image Classification | Food-101 (test) | Top-1 Accuracy | 84.3 | 89 |
| Image Classification | ImageNet original (val) | Inductive Top-1 Accuracy | 82.05 | 17 |