Bayesian Active Learning for Classification and Preference Learning

About

Information theoretic active learning has been widely studied for probabilistic models. For simple regression an optimal myopic policy is easily tractable. However, for other tasks and with more complex models, such as classification with nonparametric models, the optimal solution is harder to compute. Current approaches make approximations to achieve tractability. We propose an approach that expresses information gain in terms of predictive entropies, and apply this method to the Gaussian Process Classifier (GPC). Our approach makes minimal approximations to the full information theoretic objective. Our experimental performance compares favourably to many popular active learning algorithms, and has equal or lower computational complexity. We also compare well to decision theoretic approaches, which are privy to more information and require much more computational time. Secondly, by further developing a reformulation of binary preference learning as a classification problem, we extend our algorithm to Gaussian Process preference learning.
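The information gain described in the abstract is the mutual information between a candidate label and the model parameters, which can be written as the predictive entropy minus the expected entropy under posterior samples. The sketch below illustrates this objective with a generic Monte-Carlo estimate for binary classification; the function name and sampling setup are illustrative assumptions, not the paper's GPC-specific approximation.

```python
import numpy as np

def bald_scores(probs):
    """Monte-Carlo BALD scores for binary classification.

    probs: array of shape (S, N) holding S posterior samples of the
    predictive probability p(y=1 | x, theta_s) for N candidate points.
    Returns an (N,) array; higher scores mark points whose label
    would be most informative about the model parameters.
    """
    eps = 1e-12  # avoid log(0)

    def entropy(p):
        return -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

    mean_p = probs.mean(axis=0)                  # marginal predictive p(y=1 | x, D)
    h_marginal = entropy(mean_p)                 # H[y | x, D]
    h_conditional = entropy(probs).mean(axis=0)  # E_theta H[y | x, theta]
    return h_marginal - h_conditional            # mutual information I[y; theta | x, D]
```

A point where the samples disagree (e.g. probabilities 0.01 and 0.99 under different parameter draws) scores high, while a point where every sample is confidently 0.9 scores near zero: the model is uncertain about the first label because its parameters are uncertain, which is exactly what the objective rewards.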

Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, Máté Lengyel • 2011

Related benchmarks

Task | Dataset | Result | Rank
Image Classification | CIFAR10 (train test) | Execution Time: 5.53e+3 | 11
Selective Prediction | Diabetic Retinopathy (DR) grading patient-stratified (test) | AUSC (Critical FNR): 0.436 | 10
Selective Prediction | Diabetic Retinopathy (DR) (test) | AUSC: 0.436 | 10
Active Learning | nnActive average | AUBC: 62.39 | 9
OOD Detection | FashionMNIST → KMNIST (test) | Ratio: 5.92 | 4
OOD Detection | MIMIC-III ICU → Newborn (test) | OOD Ratio: 1.61 | 4
