How to train your neural ODE: the world of Jacobian and kinetic regularization

About

Training neural ODEs on large datasets has not been tractable due to the necessity of allowing the adaptive numerical ODE solver to refine its step size to very small values. In practice this leads to dynamics equivalent to many hundreds or even thousands of layers. In this paper, we overcome this apparent difficulty by introducing a theoretically grounded combination of optimal transport and stability regularizations, which encourage neural ODEs to prefer simpler dynamics out of all the dynamics that solve a problem well. Simpler dynamics lead to faster convergence and fewer discretization steps in the solver, considerably decreasing wall-clock time without loss of performance. Our approach allows us to train neural ODE-based generative models to the same performance as the unregularized dynamics, with significant reductions in training time. This brings neural ODEs closer to practical relevance in large-scale applications.
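The two penalties the abstract describes have a compact form that is easy to sketch. Below is a minimal, hedged illustration in PyTorch of a kinetic-energy (optimal transport) term ||f(t, x)||^2 and a Jacobian (stability) term ||df/dx||_F^2, with the Frobenius norm estimated by a Hutchinson-style trace trick; the function name and the dynamics signature f(t, x) are illustrative assumptions, not the authors' actual implementation.

```python
import torch

def ode_regularizers(f, t, x):
    """Hedged sketch of the two penalties described in the abstract.

    kinetic:   ||f(t, x)||^2 per sample (optimal-transport term)
    frobenius: unbiased estimate of ||df/dx||_F^2 per sample (stability term),
               using E_eps ||eps^T (df/dx)||^2 = ||df/dx||_F^2 for eps ~ N(0, I).
    """
    x = x.requires_grad_(True)
    dx = f(t, x)                                  # dynamics output, same shape as x
    kinetic = dx.pow(2).flatten(1).sum(dim=1)     # squared speed per sample

    eps = torch.randn_like(dx)                    # Hutchinson probe vector
    # vector-Jacobian product eps^T (d dx / d x), kept differentiable for training
    vjp = torch.autograd.grad(dx, x, grad_outputs=eps, create_graph=True)[0]
    frobenius = vjp.pow(2).flatten(1).sum(dim=1)
    return kinetic, frobenius
```

In the paper these quantities are accumulated along the ODE trajectory as extra integrated states and added to the negative log-likelihood with scalar weights (schematically, loss = nll + lam_k * kinetic.mean() + lam_j * frobenius.mean()); the sketch above evaluates them at a single (t, x) pair only.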

Chris Finlay, Jörn-Henrik Jacobsen, Levon Nurbekyan, Adam M. Oberman • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Density Estimation | CIFAR-10 (test) | Bits/dim | 3.38 | 134
Density Estimation | ImageNet 32x32 (test) | Bits per Sub-pixel | 2.36 | 66
Density Estimation | ImageNet 64x64 (test) | Bits per Sub-pixel | 3.83 | 62
Generative Modeling | CIFAR-10 | BPD | 3.38 | 46
Unconditional Image Generation | CIFAR-10 | BPD | 3.38 | 33
Unconditional Image Generation | ImageNet-32 | BPD | 3.36 | 31
Unconditional Image Generation | ImageNet 64 | BPD | 3.83 | 22
Intermediate distribution restoration | Single-cell data (intermediate time points t_i for i in {1, 2, 3}) | W1 Score | 0.825 | 15
Generative Modeling | ImageNet 64x64 downsampled | Bits Per Dimension | 3.83 | 13
Image Modeling | ImageNet 64x64 (val) | NLL (bits/dim) | 3.83 | 11
Showing 10 of 14 rows
