Tent: Fully Test-time Adaptation by Entropy Minimization
About
A model must adapt itself to generalize to new and different data during testing. In this setting of fully test-time adaptation, the model has only the test data and its own parameters. We propose to adapt by test entropy minimization (tent): we optimize the model for confidence as measured by the entropy of its predictions. Our method adapts online on each batch by estimating normalization statistics and optimizing channel-wise affine transformations. Tent reduces generalization error for image classification on corrupted ImageNet and CIFAR-10/100 and reaches a new state-of-the-art error on ImageNet-C. Tent handles source-free domain adaptation on digit recognition from SVHN to MNIST/MNIST-M/USPS, on semantic segmentation from GTA to Cityscapes, and on the VisDA-C benchmark. These results are achieved in one epoch of test-time optimization without altering training.
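The adaptation loop described above can be sketched in a few lines. This is a minimal illustration, not the official Tent implementation: it assumes a PyTorch model with BatchNorm layers, freezes all parameters except the BatchNorm affine transforms (gamma and beta), forces normalization to use test-batch statistics, and takes one gradient step per batch on the entropy of the model's own predictions.

```python
import torch
import torch.nn as nn

def softmax_entropy(logits):
    # Mean prediction entropy: H(p) = -sum_c p_c log p_c, averaged over the batch.
    probs = logits.softmax(dim=1)
    return -(probs * logits.log_softmax(dim=1)).sum(dim=1).mean()

def configure_model(model):
    # Train mode so BatchNorm normalizes with current-batch statistics;
    # only the channel-wise affine parameters (gamma, beta) are optimized.
    model.train()
    model.requires_grad_(False)
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.requires_grad_(True)
            # Discard source statistics; always estimate from the test batch.
            m.track_running_stats = False
            m.running_mean = None
            m.running_var = None
            params += [m.weight, m.bias]
    return params

def tent_step(model, x, optimizer):
    # One online adaptation step on a single test batch:
    # forward, entropy loss, backward, update BN affine parameters.
    logits = model(x)
    loss = softmax_entropy(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()
```

A usage sketch: `params = configure_model(model)`, build an optimizer such as `torch.optim.SGD(params, lr=1e-2)`, then call `tent_step(model, batch, optimizer)` on each incoming test batch; repeated steps drive prediction entropy down.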
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 | Accuracy | 69.74 | 691 |
| Semantic Segmentation | Cityscapes | mIoU | 46.8 | 658 |
| Image Classification | ImageNet-A | Top-1 Accuracy | 52.9 | 654 |
| Image Classification | ImageNet-V2 | Top-1 Accuracy | 64.2 | 611 |
| Image Classification | EuroSAT | Accuracy | 46.39 | 569 |
| Image Classification | CIFAR-10 | Accuracy | 91.69 | 564 |
| Image Classification | Flowers102 | Accuracy | 68.71 | 558 |
| Action Recognition | Something-Something v2 (val) | Top-1 Accuracy | 42.55 | 545 |
| Image Classification | Food-101 | Accuracy | 85.3 | 542 |
| Image Classification | DTD | Accuracy | 41.92 | 542 |