Improving Deep Regression with Ordinal Entropy
About
In computer vision, formulating regression problems as classification tasks is often observed to yield better performance. We investigate this curious phenomenon and show, via a derivation, that classification with the cross-entropy loss outperforms regression with a mean squared error loss in its ability to learn high-entropy feature representations. Based on this analysis, we propose an ordinal entropy loss that encourages higher-entropy feature spaces while preserving ordinal relationships, improving performance on regression tasks. Experiments on synthetic and real-world regression tasks demonstrate the importance and benefits of increasing entropy for regression.
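The core idea — spreading features apart (higher entropy) while weighting that spread by label distance (ordinal structure) — can be illustrated with a minimal sketch. This is not the paper's official implementation; the function name, the pairwise-distance formulation, and the label-distance weighting are illustrative assumptions about how such a loss term can be built:

```python
import numpy as np

def ordinal_entropy_loss(features, labels, eps=1e-8):
    """Illustrative sketch of an ordinal-entropy-style regularizer.

    Encourages a high-entropy (spread-out) feature space while preserving
    ordinal relationships: pairwise feature distances are weighted by the
    corresponding label distances, so samples with very different targets
    are pushed furthest apart. Minimizing the (negative) weighted mean
    distance therefore increases feature diversity in an ordinal way.
    """
    # L2-normalize features so distances are scale-invariant
    z = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    # pairwise Euclidean distances between normalized features
    diff = z[:, None, :] - z[None, :, :]
    fdist = np.sqrt((diff ** 2).sum(axis=-1) + eps)
    # pairwise label distances, normalized to [0, 1], used as weights
    ldist = np.abs(labels[:, None] - labels[None, :])
    w = ldist / (ldist.max() + eps)
    # exclude self-pairs; return negative weighted mean distance,
    # so gradient descent spreads the features apart
    mask = ~np.eye(len(labels), dtype=bool)
    return float(-(w[mask] * fdist[mask]).mean())
```

In practice such a term would be added, with a small weight, to the primary regression loss (e.g. MSE) during training; collapsed features give a loss near zero, while well-spread features drive it more negative.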
Shihao Zhang, Linlin Yang, Michael Bi Mi, Xiaoxu Zheng, Angela Yao • 2023
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Depth Estimation | NYU v2 (test) | Threshold Accuracy (delta < 1.25) | 93.2 | 423 |
| Crowd Counting | ShanghaiTech Part A (test) | MAE | 65.6 | 227 |
| Crowd Counting | ShanghaiTech Part B (test) | MAE | 9.1 | 191 |
| Cell Counting | VGG (test) | MAE | 5.7 | 14 |
| Cell Counting | MBM (test) | MAE | 2.9 | 14 |
| Age Estimation | AgeDB (val) | Age MAE | 6.47 | 13 |
| Regression | UCI-DIR Real Estate (test) | MAE (All) | 0.339 | 12 |
| Regression | UCI-DIR Abalone (test) | MAE (All) | 6.77 | 12 |
| Regression | SkyFinder | MAE | 2.94 | 11 |
| Regression | TUAB | MAE | 7.28 | 10 |
*Showing 10 of 21 benchmark rows.*