Approximately Equivariant Recurrent Generative Models for Quasi-Periodic Time Series with a Progressive Training Scheme
About
We present a simple yet effective generative model for time series, based on a Recurrent Variational Autoencoder, which we refer to as AEQ-RVAE-ST. Recurrent layers often struggle with unstable optimization and poor convergence when modeling long sequences. To address these limitations, we introduce a training scheme that progressively increases the sequence length, stabilizing optimization and enabling consistent learning over extended horizons. By composing known components into a recurrent, approximately time-shift-equivariant topology, our model introduces an inductive bias that aligns with the structure of quasi-periodic and nearly stationary time series. Across several benchmark datasets, AEQ-RVAE-ST matches or surpasses state-of-the-art generative models, particularly on quasi-periodic data, while remaining competitive on more irregular signals. Performance is evaluated through the ELBO, Fréchet distance, discriminative metrics, and visualizations of the learned latent embeddings.
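The progressive training scheme can be sketched as a simple sequence-length curriculum: start training on short windows and grow the window stepwise until the full horizon is reached. The function names, schedule shape, and parameters below are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of a progressive sequence-length schedule:
# the recurrent model first trains on short, easy windows, and the
# window length is grown stepwise until it covers the full horizon.

def progressive_seq_len(epoch, start_len=16, max_len=256,
                        growth_every=10, factor=2):
    """Sequence length at a given epoch: multiplied by `factor`
    every `growth_every` epochs, capped at `max_len`."""
    steps = epoch // growth_every
    return min(start_len * factor ** steps, max_len)

def make_windows(series, seq_len):
    """Cut a long 1-D series into non-overlapping training windows."""
    n = len(series) // seq_len
    return [series[i * seq_len:(i + 1) * seq_len] for i in range(n)]

if __name__ == "__main__":
    series = list(range(1024))
    for epoch in (0, 10, 25, 100):
        L = progressive_seq_len(epoch)
        print(f"epoch {epoch}: seq_len={L}, windows={len(make_windows(series, L))}")
```

Inside a real training loop, the schedule would be queried at the start of each epoch and the dataloader rebuilt with the new window length, so early optimization steps see short sequences and later ones the full horizon.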
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Synthetic Time Series Generation | Electric Motor | Average ELBO | 1.66 | 48 |
| Synthetic Time Series Generation | ETT | Average ELBO | 1.6 | 48 |
| Synthetic Time Series Generation | Sine | Average ELBO Score | 1.47 | 48 |
| Synthetic Time Series Generation | MetroPT3 | Average ELBO | 1.49 | 48 |
| Synthetic Time Series Generation | ECG | Average ELBO | 1.64 | 48 |
| Synthetic Time Series Generation | ECG | FID Score | 0.08 | 24 |
| Synthetic Time Series Generation | ETT | FID Score | 0.58 | 24 |
| Synthetic Time Series Generation | MetroPT3 | FID Score | 0.26 | 24 |
| Time-series generation | Sine | Discriminative Score | 0.021 | 24 |
| Synthetic Time Series Generation | Electric Motor | FID Score | 0.1 | 24 |