Attentive Neural Processes

About

Neural Processes (NPs) (Garnelo et al., 2018a;b) approach regression by learning to map a context set of observed input-output pairs to a distribution over regression functions. Each function models the distribution of the output given an input, conditioned on the context. NPs have the benefit of fitting observed data efficiently, with linear complexity in the number of context input-output pairs, and can learn a wide family of conditional distributions; they learn predictive distributions conditioned on context sets of arbitrary size. Nonetheless, we show that NPs suffer a fundamental drawback of underfitting, giving inaccurate predictions at the inputs of the observed data they condition on. We address this issue by incorporating attention into NPs, allowing each input location to attend to the relevant context points for the prediction. We show that this greatly improves the accuracy of predictions, results in noticeably faster training, and expands the range of functions that can be modelled.
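To make the attention mechanism concrete, below is a minimal PyTorch sketch of the deterministic path of an ANP, not the authors' implementation: each target input cross-attends over the context inputs, so its summary of the context is target-specific rather than the single mean-pooled vector a vanilla NP uses. The latent path of the full model is omitted, and all names here (AttentiveNPDecoder, hid, n_heads, and so on) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveNPDecoder(nn.Module):
    """Deterministic path of an ANP (illustrative sketch, not the paper's code).

    A vanilla NP mean-pools the per-context-point representations r_i into one
    vector shared by all targets; the ANP instead lets every target input
    cross-attend over the context, so predictions at observed inputs are
    dominated by the relevant observations, countering the underfitting.
    """

    def __init__(self, x_dim: int = 1, y_dim: int = 1, hid: int = 64, n_heads: int = 4):
        super().__init__()
        # r_i = MLP([x_i; y_i]) per context pair (linear cost in context size).
        self.embed_xy = nn.Sequential(
            nn.Linear(x_dim + y_dim, hid), nn.ReLU(), nn.Linear(hid, hid))
        # Shared input embedding, used for attention queries and keys.
        self.embed_x = nn.Linear(x_dim, hid)
        self.cross_attn = nn.MultiheadAttention(hid, n_heads, batch_first=True)
        # Decoder maps [query embedding; attended context] to a Gaussian.
        self.decode = nn.Sequential(
            nn.Linear(2 * hid, hid), nn.ReLU(), nn.Linear(hid, 2 * y_dim))

    def forward(self, x_ctx, y_ctx, x_tgt):
        r = self.embed_xy(torch.cat([x_ctx, y_ctx], dim=-1))  # (B, C, hid) values
        q = self.embed_x(x_tgt)                               # (B, T, hid) queries
        k = self.embed_x(x_ctx)                               # (B, C, hid) keys
        # Each target location attends to the context points relevant to it.
        r_star, _ = self.cross_attn(q, k, r)                  # (B, T, hid)
        mean, raw_sigma = self.decode(torch.cat([q, r_star], dim=-1)).chunk(2, dim=-1)
        sigma = 0.1 + 0.9 * F.softplus(raw_sigma)             # std bounded away from 0
        return mean, sigma

# Toy usage: 10 observed (x, y) pairs, predictions at 50 target inputs.
model = AttentiveNPDecoder()
x_c, y_c, x_t = torch.randn(8, 10, 1), torch.randn(8, 10, 1), torch.randn(8, 50, 1)
mean, sigma = model(x_c, y_c, x_t)  # both (8, 50, 1)
```

Replacing the mean-pooled context summary with this per-target cross-attention is the core change the abstract describes; training would maximize the Gaussian log-likelihood of the target outputs under the predicted mean and sigma.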

Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, Ali Eslami, Dan Rosenbaum, Oriol Vinyals, Yee Whye Teh • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Classification | S-CIFAR-100 | Accuracy | 39.06 | 26
Classification | S-CIFAR-10 | Accuracy | 58.77 | 26
Class-incremental learning | S-CIFAR-10 | BWT Score | -49.18 | 25
Classification | P-MNIST | Accuracy | 80.98 | 23
Domain-incremental learning | P-MNIST | Backward Transfer Score | -16.44 | 22
Domain-incremental learning | R-MNIST | Backward Transfer Score | -10.63 | 22
1D Regression | Synthetic 1D Regression RBF kernel with noises | Context Likelihood | 0.957 | 16
1D Regression | Synthetic 1D Regression RBF kernel | Context Likelihood | 1.05 | 16
1D Regression | Synthetic 1D Regression Matern kernel GP | Context Likelihood | 1.014 | 16
1D Regression | Synthetic 1D Regression Periodic kernel GP | Context Likelihood | 0.926 | 16
(Showing 10 of 34 rows.)
