
Learning Scalable Deep Kernels with Recurrent Structure

About

Many applications in speech, robotics, finance, and biology deal with sequential data, where ordering matters and recurrent structures are common. However, this structure cannot be easily captured by standard kernel functions. To model such structure, we propose expressive closed-form kernel functions for Gaussian processes. The resulting model, GP-LSTM, fully encapsulates the inductive biases of long short-term memory (LSTM) recurrent networks, while retaining the non-parametric probabilistic advantages of Gaussian processes. We learn the properties of the proposed kernels by optimizing the Gaussian process marginal likelihood using a new provably convergent semi-stochastic gradient procedure and exploit the structure of these kernels for scalable training and prediction. This approach provides a practical representation for Bayesian LSTMs. We demonstrate state-of-the-art performance on several benchmarks, and thoroughly investigate a consequential autonomous driving application, where the predictive uncertainties provided by GP-LSTM are uniquely valuable.
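The core construction described above, a GP whose kernel is a standard covariance applied to the embeddings of a recurrent encoder, trained by maximizing the marginal likelihood, can be sketched as follows. This is a minimal illustration, not the paper's implementation: a toy tanh recurrence stands in for the LSTM, and all function names (`encode`, `deep_rbf_kernel`, `neg_log_marginal_likelihood`) are illustrative.

```python
import numpy as np

def encode(X, W):
    # Placeholder recurrent encoder (assumption): in GP-LSTM this would be
    # an LSTM mapping each input sequence to a fixed-length hidden state.
    # Here a simple tanh recurrence over the time steps stands in.
    h = np.zeros((X.shape[0], W.shape[1]))
    for t in range(X.shape[1]):
        h = np.tanh(X[:, t:t + 1] @ np.ones((1, W.shape[1])) + h @ W)
    return h

def deep_rbf_kernel(X1, X2, W, lengthscale=1.0):
    # Deep recurrent kernel: an RBF base kernel over learned embeddings,
    # k(x, x') = exp(-||phi(x) - phi(x')||^2 / (2 * lengthscale^2)).
    H1, H2 = encode(X1, W), encode(X2, W)
    sq = ((H1[:, None, :] - H2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

def neg_log_marginal_likelihood(X, y, W, noise=0.1):
    # GP negative log marginal likelihood: the objective that the paper
    # optimizes jointly over kernel hyperparameters and encoder weights
    # (there, with a semi-stochastic gradient procedure).
    K = deep_rbf_kernel(X, X, W) + noise ** 2 * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha
            + np.log(np.diag(L)).sum()
            + 0.5 * len(y) * np.log(2 * np.pi))
```

Gradients of this objective with respect to the encoder weights `W` would flow through the kernel matrix, which is what lets the recurrent representation be learned under the GP marginal likelihood rather than a parametric loss.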

Maruan Al-Shedivat, Andrew Gordon Wilson, Yunus Saatchi, Zhiting Hu, Eric P. Xing • 2016

Related benchmarks

Task                         Dataset                  Metric     Result   Rank
Time-series classification   CHARACTER TRAJ. (test)   Accuracy   0.233    73
Time-series classification   PENDIGITS (test)         Accuracy   95.3     36
Time-series classification   WALK VS RUN (test)       Accuracy   100      27
Time-series classification   UWAVE (test)             Accuracy   87       27
Time-series classification   CMUSUBJECT16 (test)      Accuracy   99.3     19
Time-series classification   PEMS (test)              Accuracy   76.9     16
Time-series classification   Japanese Vowels (test)   Accuracy   98.6     14
Time-series classification   DIGITSHAPES (test)       Accuracy   100      14
Time-series classification   ECG (test)               Accuracy   78.2     14
Video Generation             KTH                      FVD Score  92.34    8

Showing 10 of 26 rows.
