
Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model

About

Inspired by biological evolution, we explain the rationality of the Vision Transformer by analogy with the well-proven Evolutionary Algorithm (EA) and show that the two share a consistent mathematical formulation. Analogous to the dynamic local population in EA, we improve the existing transformer structure to propose a more efficient EAT model, and we design task-related heads to handle different tasks more flexibly. Moreover, we introduce the space-filling curve into the current vision transformer to serialize image data into a uniform sequential format. We can thus design a unified EAT framework to address multi-modal tasks, decoupling the network architecture from data-format adaptation. Our approach achieves state-of-the-art results on the ImageNet classification task compared with recent vision transformer works, while having fewer parameters and greater throughput. We further conduct multi-modal tasks to demonstrate the superiority of the unified EAT, e.g., Text-Based Image Retrieval, where our approach improves Rank-1 by +3.7 points over the baseline on the CSS dataset.
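The space-filling-curve serialization mentioned above can be illustrated with a small sketch. The following is a hypothetical example (not the authors' code) that orders a 4×4 grid of patch indices along a Hilbert curve, using the classic curve-index-to-coordinate mapping; the grid size, variable names, and use of raw patch ids in place of patch embeddings are all illustrative assumptions:

```python
import numpy as np

def hilbert_d2xy(order: int, d: int) -> tuple[int, int]:
    """Map curve index d to (x, y) on a 2**order x 2**order Hilbert curve."""
    x = y = 0
    t = d
    s = 1
    n = 1 << order
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # reflect/rotate the sub-quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Stand-in for a 4x4 grid of image patches (here: just patch ids,
# where a real model would use patch embeddings).
order = 2
n = 1 << order
patch_ids = np.arange(n * n).reshape(n, n)

# Serialize the 2D grid into a 1D token sequence along the Hilbert curve.
curve = [hilbert_d2xy(order, d) for d in range(n * n)]
sequence = [int(patch_ids[y, x]) for x, y in curve]
print(sequence)
```

The appeal of a space-filling curve over plain row-major raster order is locality: consecutive tokens in the serialized sequence are always spatially adjacent patches, so nearby image content stays nearby in the 1D sequence the transformer consumes.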

Jiangning Zhang, Chao Xu, Jian Li, Wenzhou Chen, Yabiao Wang, Ying Tai, Shuo Chen, Chengjie Wang, Feiyue Huang, Yong Liu • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy (%) | 82.1 | 1155 |
| Vision-and-Language Navigation | R2R (val unseen) | -- | -- | 260 |
| Text-Based Image Retrieval | MIT-States | Rank-1 Accuracy (%) | 15 | 7 |
| Text-Based Image Retrieval | Fashion200k | Rank-1 Accuracy (%) | 20.1 | 7 |
| Text-Based Image Retrieval | CSS | Rank-1 Accuracy (%) | 73.8 | 6 |
| Vision-and-Language Navigation | R2R (seen) | Navigation Error (NE) | 3.84 | 4 |

Other info

Code
