Complementary-Label Learning for Arbitrary Losses and Models
About
In contrast to the standard classification paradigm, where each training pattern is given its true class, complementary-label learning uses training patterns each equipped only with a complementary label, which specifies one of the classes that the pattern does not belong to. The goal of this paper is to derive a novel framework of complementary-label learning with an unbiased estimator of the classification risk for arbitrary losses and models, a goal that no existing method achieves. Not only is this beneficial for the learning stage, but it also makes model/hyper-parameter selection (through cross-validation) possible without any ordinarily labeled validation data, while using any linear/non-linear models or convex/non-convex loss functions. We further improve the risk estimator with a non-negative correction and a gradient-ascent trick, and demonstrate its superiority through experiments.
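The core idea can be sketched in a few lines. Under the uniform complementary-label assumption, the classification risk rewrites as R(f) = E[-(K-1)·l(f(x), ȳ) + Σ_k l(f(x), k)], which is estimable from complementarily labeled data alone; the non-negative correction clamps each class-wise partial risk at zero. The NumPy sketch below illustrates this with softmax cross-entropy as the base loss. It is a simplified illustration under these assumptions, not the paper's implementation, and all function names are ours:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def comp_risk(logits, comp_labels):
    """Unbiased risk estimate from complementary labels (uniform assumption):
    mean over examples of -(K-1)*l(f(x), ybar) + sum_k l(f(x), k),
    with cross-entropy l(f(x), k) = -log p_k as the base loss."""
    n, K = logits.shape
    losses = -np.log(softmax(logits) + 1e-12)        # (n, K): loss toward every class
    l_bar = losses[np.arange(n), comp_labels]        # loss toward the complementary label
    return (-(K - 1) * l_bar + losses.sum(axis=1)).mean()

def nn_comp_risk(logits, comp_labels):
    """Non-negative correction (simplified sketch): group examples by their
    complementary label, clamp each group's partial risk at zero, then sum.
    The paper additionally uses gradient ascent when a partial risk goes negative."""
    n, K = logits.shape
    losses = -np.log(softmax(logits) + 1e-12)
    total = 0.0
    for k in range(K):
        idx = comp_labels == k
        if not idx.any():
            continue
        partial = (-(K - 1) * losses[idx, k] + losses[idx].sum(axis=1)).mean()
        total += max(0.0, partial)                   # clamp: the true partial risk is non-negative
    return total
```

Because the uncorrected estimator is a plain average of per-example terms, it can go negative on a finite sample even though the true risk cannot; the clamped variant is guaranteed non-negative, which is what stabilizes training with flexible models.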
Related benchmarks
| Task | Dataset | Accuracy (%) | Rank |
|---|---|---|---|
| Image Classification | MNIST (test) | 88.1 | 882 |
| Image Classification | Fashion MNIST (test) | 81.73 | 568 |
| Image Classification | CIFAR-10 | 36.8 | 471 |
| Image Classification | MNIST | 88.1 | 395 |
| Image Classification | SVHN (test) | 17.56 | 362 |
| Classification | CIFAR10 (test) | 36.8 | 266 |
| Image Classification | Fashion MNIST | 78.7 | 225 |
| Time-series classification | PENDIGITS (test) | 15.01 | 36 |
| Classification | LETTER (test) | 5.12 | 33 |
| Image Classification | EMNIST Balanced (test) | 4.25 | 26 |