Fine-Tuning DARTS for Image Classification
About
Neural Architecture Search (NAS) has gained traction due to its superior classification performance. Differentiable Architecture Search (DARTS) is a computationally lightweight NAS method. To limit computational cost, DARTS makes numerous approximations, and these approximations degrade its performance. We propose to fine-tune DARTS using fixed operations, since they are independent of these approximations. Our method offers a good trade-off between the number of parameters and classification accuracy. It improves top-1 accuracy on the Fashion-MNIST, CompCars, and MIO-TCD datasets by 0.56%, 0.50%, and 0.39%, respectively, compared to state-of-the-art approaches. It also outperforms DARTS itself, improving accuracy by 0.28%, 1.64%, 0.34%, 4.5%, and 3.27% on the CIFAR-10, CIFAR-100, Fashion-MNIST, CompCars, and MIO-TCD datasets, respectively.
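The "fixed operations" idea above builds on the standard DARTS discretization step: after search, each edge's softmax mixture over candidate operations is replaced by the single highest-weight operation, and only the network weights are then fine-tuned. A minimal sketch of that step is below; the operation names, edge keys, and logit values are illustrative, not taken from the paper.

```python
# Hypothetical sketch of DARTS-style discretization: each edge's learned
# architecture logits (alphas) are collapsed to one fixed operation.
# Operation names and numeric values are illustrative assumptions.
import math

CANDIDATE_OPS = ["sep_conv_3x3", "sep_conv_5x5", "max_pool_3x3", "skip_connect"]

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def discretize(alphas):
    """Map each edge's architecture logits to its single best operation."""
    arch = {}
    for edge, logits in alphas.items():
        probs = softmax(logits)
        best = max(range(len(probs)), key=probs.__getitem__)
        arch[edge] = CANDIDATE_OPS[best]
    return arch

# Toy architecture parameters "learned" during search (illustrative values).
alphas = {
    (0, 1): [0.9, 0.1, -0.3, 0.2],  # strongest weight on sep_conv_3x3
    (0, 2): [-0.5, 0.1, 1.2, 0.0],  # strongest weight on max_pool_3x3
}
print(discretize(alphas))
```

Because the chosen operations no longer depend on the continuous relaxation, fine-tuning the resulting fixed-operation network avoids the approximations made during the search phase.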
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | CIFAR-100 | -- | 622 |
| Image Classification | CIFAR-10 | Accuracy: 97.52% | 471 |
| Image Classification | Fashion MNIST | Accuracy: 96.91% | 225 |
| Image Classification | FashionMNIST | Accuracy: 96.91% | 147 |
| Image Classification | CompCars Web (test) | Top-1 Accuracy: 95.9% | 33 |
| Image Classification | CompCars Surveillance (test) | Top-1 Accuracy: 99.2% | 14 |
| Fine-Grained Vehicle Recognition | CompCars | Accuracy: 95.9% | 11 |
| Image Classification | Fashion MNIST 66 (test) | Accuracy: 96.91% | 8 |
| Image Classification | MIO-TCD (test) | Top-1 Accuracy: 98.34% | 8 |