
MambaMixer: Efficient Selective State Space Models with Dual Token and Channel Selection

About

Recent advances in deep learning have mainly relied on Transformers due to their data dependency and ability to learn at scale. The attention module in these architectures, however, exhibits quadratic time and space complexity in the input size, limiting their scalability for long-sequence modeling. Despite recent attempts to design efficient and effective architecture backbones for multi-dimensional data, such as images and multivariate time series, existing models are either data-independent or fail to allow inter- and intra-dimension communication. Recently, State Space Models (SSMs), and more specifically Selective State Space Models, with efficient hardware-aware implementations, have shown promising potential for long-sequence modeling. Motivated by the success of SSMs, we present MambaMixer, a new architecture with data-dependent weights that uses a dual selection mechanism across tokens and channels, called the Selective Token and Channel Mixer. MambaMixer connects selective mixers using a weighted averaging mechanism, allowing layers to have direct access to early features. As a proof of concept, we design the Vision MambaMixer (ViM2) and Time Series MambaMixer (TSM2) architectures based on the MambaMixer block and explore their performance on various vision and time series forecasting tasks. Our results underline the importance of selective mixing across both tokens and channels. In ImageNet classification, object detection, and semantic segmentation tasks, ViM2 achieves competitive performance with well-established vision models and outperforms SSM-based vision models. In time series forecasting, TSM2 achieves outstanding performance compared to state-of-the-art methods while demonstrating significantly improved computational cost. These results show that while Transformers, cross-channel attention, and MLPs are sufficient for good performance in time series forecasting, none of them is necessary.
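The block structure described above — a selective mixer applied along the token axis, a second selective mixer applied along the channel axis, and a weighted average over earlier block outputs — can be sketched in simplified form. This is a toy illustration, not the paper's implementation: the real Selective Token and Channel Mixers use Mamba-style selective SSM scans with hardware-aware kernels, whereas here the "selection" is reduced to an input-dependent forget gate, and all weights (`W_tok`, `W_ch`, `alphas`) are placeholder names that would be learned parameters in practice.

```python
import numpy as np

def selective_mix(x, W_gate):
    """Toy selective scan along axis 0 of x (shape: steps x features).

    h_t = g_t * h_{t-1} + (1 - g_t) * x_t, with an input-dependent gate
    g_t = sigmoid(x_t @ W_gate). This stands in for a selective SSM scan.
    """
    h = np.zeros_like(x[0])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        g = 1.0 / (1.0 + np.exp(-(x[t] @ W_gate)))
        h = g * h + (1.0 - g) * x[t]
        out[t] = h
    return out

def mamba_mixer_block(x, W_tok, W_ch, alphas, earlier):
    """One simplified MambaMixer-style block.

    x:       (n_tokens, n_channels) input features
    earlier: list of outputs from earlier blocks, same shape as x
    alphas:  unnormalized weights, length len(earlier) + 1
    """
    # Selective token mixer: scan along the token axis.
    y = selective_mix(x, W_tok)
    # Selective channel mixer: transpose so channels become the scanned axis.
    y = selective_mix(y.T, W_ch).T
    # Weighted averaging over earlier block outputs gives each layer
    # direct access to early features.
    feats = earlier + [y]
    w = np.exp(alphas) / np.exp(alphas).sum()  # softmax-normalized weights
    return sum(wi * f for wi, f in zip(w, feats))

# Example: two stacked blocks on a (6 tokens x 4 channels) input.
rng = np.random.default_rng(0)
x = rng.standard_normal((6, 4))
W_tok, W_ch = rng.standard_normal((4, 4)), rng.standard_normal((6, 6))
y1 = mamba_mixer_block(x, W_tok, W_ch, np.array([0.5, 0.5]), [x])
y2 = mamba_mixer_block(y1, W_tok, W_ch, np.array([0.2, 0.3, 0.5]), [x, y1])
```

The transpose before the second `selective_mix` is the key idea: the same sequential mixing machinery is reused across channels, which is what lets the block model cross-channel dependencies without attention or MLP mixing.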

Ali Behrouz, Michele Santacatterina, Ramin Zabih • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Long-term time-series forecasting | Weather | MSE 0.239 | 348 |
| Long-term time-series forecasting | Traffic | MSE 0.42 | 278 |
| Long-term forecasting | ETTm1 | MSE 0.361 | 184 |
| Long-term forecasting | ETTh1 | MSE 0.403 | 179 |
| Long-term forecasting | ETTm2 | MSE 0.267 | 174 |
| Long-term forecasting | ETTh2 | MSE 0.333 | 163 |
| Long-term time-series forecasting | ECL | MSE 0.169 | 134 |
| Long-term forecasting | Exchange | MSE 0.443 | 46 |
