Scalable Gradients for Stochastic Differential Equations

About

The adjoint sensitivity method scalably computes gradients of solutions to ordinary differential equations. We generalize this method to stochastic differential equations, allowing time-efficient and constant-memory computation of gradients with high-order adaptive solvers. Specifically, we derive a stochastic differential equation whose solution is the gradient, a memory-efficient algorithm for caching noise, and conditions under which numerical solutions converge. In addition, we combine our method with gradient-based stochastic variational inference for latent stochastic differential equations. We use our method to fit stochastic dynamics defined by neural networks, achieving competitive performance on a 50-dimensional motion capture dataset.

Xuechen Li, Ting-Kam Leonard Wong, Ricky T. Q. Chen, David Duvenaud • 2020
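
To make the headline result concrete, here is a compact sketch of the adjoint SDE in our own notation, based on the paper's Stratonovich setup (not a verbatim quote from the paper):

```latex
% Forward SDE (Stratonovich) with state Z_t, drift b, and diffusion \sigma:
dZ_t = b(Z_t, t)\,dt + \sigma(Z_t, t) \circ dW_t .
% The adjoint A_t = \partial L / \partial Z_t for a scalar loss L(Z_T)
% solves a second SDE, integrated backwards from A_T = \partial L / \partial Z_T:
dA_t = -A_t^\top \frac{\partial b}{\partial z}(Z_t, t)\,dt
       - A_t^\top \frac{\partial \sigma}{\partial z}(Z_t, t) \circ dW_t .
```

Solving this backward SDE alongside a reconstruction of Z_t yields gradients whose memory cost is independent of the number of solver steps, mirroring the ODE adjoint; the paper's noise-caching algorithm makes the same Brownian sample W_t available on the backward pass.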

Related benchmarks

Task                                   | Dataset                                    | Result         | Rank
Trajectory Inference                   | EB dataset 5D (test)                       | W1 (t=1): 0.91 | 23
Continuous sequence prediction         | COVID-19 SIR dynamics in Japan (standard)  | MSE: 1.086     | 8
Reach velocity decoding (prediction)   | monkey reaching                            | R^2: 18.7      | 7
Reach velocity decoding (smoothing)    | monkey reaching                            | R^2: 76.6      | 7
Continuous sequence prediction         | COVID-19 SIR dynamics in Japan (extended)  | MSE: 3.185     | 6
x-y position decoding (smoothing)      | bouncing ball                              | R^2: 0.813     | 6
x-y position decoding (prediction)     | bouncing ball                              | R^2: 0.231     | 6
Angular velocity decoding (prediction) | Pendulum                                   | R^2: 0.138     | 6
Angular velocity decoding (smoothing)  | Pendulum                                   | R^2: 92.1      | 6
Future Frame Prediction                | CMU Motion Capture Subject 35 (test)       | --             | 1

Other info

Code
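
A PyTorch implementation of the method is available in the torchsde library (github.com/google-research/torchsde). Below is a minimal, hedged sketch of typical usage, not the authors' exact code: it defines a small neural SDE and backpropagates through its solution with the stochastic adjoint via sdeint_adjoint. The class attributes and method names follow torchsde's documented interface, but exact options may differ across versions.

```python
import torch
import torchsde


class NeuralSDE(torch.nn.Module):
    # torchsde reads these attributes to pick compatible solvers.
    noise_type = "diagonal"       # diffusion acts elementwise on the state
    sde_type = "stratonovich"     # the paper derives the adjoint for Stratonovich SDEs

    def __init__(self, dim: int = 3):
        super().__init__()
        self.drift_net = torch.nn.Linear(dim, dim)
        self.diffusion_net = torch.nn.Linear(dim, dim)

    def f(self, t, y):
        # Drift b(y, t); a single linear layer here for brevity.
        return self.drift_net(y)

    def g(self, t, y):
        # Diagonal diffusion sigma(y, t), same shape as y.
        return self.diffusion_net(y)


sde = NeuralSDE()
y0 = torch.full((16, 3), 0.1, requires_grad=True)  # batch of initial states
ts = torch.linspace(0.0, 1.0, steps=20)            # observation times

# sdeint_adjoint solves the SDE forward, then solves the adjoint SDE on the
# backward pass, so gradient memory does not grow with the number of steps.
ys = torchsde.sdeint_adjoint(sde, y0, ts, method="midpoint")

loss = ys[-1].pow(2).mean()
loss.backward()  # fills .grad for y0 and both networks' parameters
```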
