
WaveMix: Resource-efficient Token Mixing for Images

About

Although certain vision transformer (ViT) and CNN architectures generalize well on vision tasks, using them on green, edge, or desktop computing is often impractical due to their computational requirements for training and even testing. We present WaveMix, an alternative neural architecture that uses a multi-scale 2D discrete wavelet transform (DWT) for spatial token mixing. Unlike ViTs, WaveMix neither unrolls the image nor requires self-attention of quadratic complexity. Additionally, the DWT introduces another inductive bias -- besides convolutional filtering -- that exploits the 2D structure of an image to improve generalization. The multi-scale nature of the DWT also reduces the need for deeper architectures compared to CNNs, which rely on pooling for partial spatial mixing. WaveMix models achieve generalization competitive with ViTs, CNNs, and token mixers on several datasets while requiring less GPU RAM (for training and testing), fewer computations, and less storage. WaveMix has achieved state-of-the-art (SOTA) results on the EMNIST ByClass and EMNIST Balanced datasets.
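The core idea -- replacing self-attention with a 2D DWT for spatial token mixing -- can be sketched with PyWavelets. This is a minimal illustration, not the authors' implementation: the function name is hypothetical, and the pointwise mixing and upsampling that follow the transform in the actual WaveMix block are omitted.

```python
import numpy as np
import pywt


def dwt_token_mix(x, wavelet="haar"):
    """Sketch of WaveMix-style spatial token mixing for one feature map.

    A single-level 2D DWT splits an H x W map into four H/2 x W/2 subbands
    (approximation + horizontal, vertical, diagonal details). Stacking them
    along a channel axis mixes spatial information at half resolution; in
    WaveMix this stacked tensor would then pass through learned pointwise
    layers and be upsampled back (omitted here).
    """
    cA, (cH, cV, cD) = pywt.dwt2(x, wavelet)
    return np.stack([cA, cH, cV, cD], axis=0)


feature_map = np.random.rand(8, 8)
mixed = dwt_token_mix(feature_map)
print(mixed.shape)  # (4, 4, 4): four subbands at half spatial resolution
```

Because the DWT is a fixed, linear, multi-scale operator, this mixing step costs O(HW) rather than the O((HW)^2) of full self-attention over unrolled tokens.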

Pranav Jeevan, Amit Sethi · 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | MNIST (test) | Accuracy: 99.71 | 882 |
| Image Classification | CIFAR-100 | Top-1 Accuracy: 70.2 | 622 |
| Image Classification | Fashion MNIST (test) | Accuracy: 93.91 | 568 |
| Image Classification | SVHN (test) | -- | 362 |
| Image Classification | STL-10 (test) | Accuracy: 70.88 | 357 |
| Image Classification | Tiny-ImageNet | Accuracy: 52.03 | 227 |
| Image Classification | CIFAR-10 | Top-1 Accuracy: 91.08 | 124 |
| Image Classification | Caltech-256 (test) | Top-1 Accuracy: 54.62 | 59 |
| Image Classification | EMNIST Balanced (test) | Accuracy: 91.06 | 26 |
| Handwritten Character Classification | EMNIST-Letters (test) | Accuracy: 95.78 | 15 |
Showing 10 of 16 rows

Other info

Code
