Uncertainty Estimation for Multi-view Data: The Power of Seeing the Whole Picture
About
Uncertainty estimation is essential for making neural networks trustworthy in real-world applications. Extensive research has sought to quantify and reduce predictive uncertainty, but most existing work targets unimodal data, and multi-view uncertainty estimation remains insufficiently investigated. We therefore propose a new multi-view classification framework for improved uncertainty estimation and out-of-domain sample detection: each view is associated with an uncertainty-aware classifier, and the per-view predictions are combined in a principled way. Experiments on real-world datasets demonstrate that the proposed approach is an accurate, reliable, and well-calibrated classifier that consistently outperforms the tested multi-view baselines in expected calibration error, robustness to noise, and accuracy on both in-domain classification and out-of-domain detection.
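The abstract does not spell out the combination rule, so as a purely illustrative sketch, one common "principled" choice for uncertainty-aware fusion is subjective-logic-style evidential aggregation: each view produces non-negative class evidence, evidence is pooled across views into a single Dirichlet, and per-prediction uncertainty falls as total evidence grows. All function names and numbers below are hypothetical, not taken from the paper:

```python
import numpy as np

def view_prediction(evidence):
    """Turn one view's non-negative class evidence into (probabilities, uncertainty).

    Dirichlet parameters are alpha = evidence + 1; the uncertainty mass
    u = K / S (K classes, S = sum of alphas) shrinks as evidence accumulates.
    """
    alpha = np.asarray(evidence, dtype=float) + 1.0
    s = alpha.sum()
    prob = alpha / s
    u = len(alpha) / s
    return prob, u

def fuse_views(evidences):
    """Fuse several views by summing their evidence, then re-deriving
    probabilities and uncertainty from the pooled Dirichlet."""
    total = np.sum(np.asarray(evidences, dtype=float), axis=0)
    return view_prediction(total)

# Two hypothetical views of a 3-class problem.
e_confident = np.array([8.0, 1.0, 0.0])   # strongly favors class 0
e_ambiguous = np.array([2.0, 2.0, 1.0])   # weak, spread-out evidence
prob, u = fuse_views([e_confident, e_ambiguous])
```

With these toy numbers, the fused prediction keeps class 0 on top while the pooled uncertainty is lower than the ambiguous view's own uncertainty, which is the qualitative behavior multi-view fusion is after.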
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Classification | CUB (test) | Accuracy | 85.48 | 79 |
| Classification | Caltech101 (test) | Accuracy | 92.68 | 33 |
| Multi-view Classification | HMDB (test) | Accuracy | 72.3 | 14 |
| Multi-view Classification | PIE (test) | Accuracy | 92.06 | 14 |
| Multi-view Classification | Caltech101 (test) | Accuracy | 93 | 14 |
| Multi-view Classification | CUB (test) | Accuracy | 92.33 | 14 |
| Classification | Handwritten (test) | Accuracy | 97.66 | 12 |
| Classification | Scene15 (test) | Accuracy | 0.6574 | 10 |
| Classification | HMDB (test) | Accuracy | 67.02 | 8 |
| Classification | PIE (test) | Accuracy | 90.97 | 8 |