Flash Multi-Head Feed-Forward Network

About

We explore the Multi-Head FFN (MH-FFN) as a replacement for the FFN in the Transformer architecture, motivated by the structural similarity between single-head attention and the FFN. While multi-head mechanisms enhance expressivity in attention, naively applying them to FFNs faces two challenges: memory consumption that scales with the head count, and an increasingly imbalanced ratio between the growing intermediate size and the fixed head dimension as models scale, which degrades scalability and expressive power. To address these challenges, we propose Flash Multi-Head FFN (FlashMHF), with two key innovations: an I/O-aware fused kernel that computes outputs online in SRAM, akin to FlashAttention, and a design that uses dynamically weighted parallel sub-networks to maintain a balanced ratio between the intermediate and head dimensions. Validated on models from 128M to 1.3B parameters, FlashMHF consistently improves perplexity and downstream task accuracy over SwiGLU FFNs, while reducing peak memory usage by 3-5x and accelerating inference by up to 1.08x. Our work establishes the multi-head design as a superior architectural principle for FFNs, presenting FlashMHF as a powerful, efficient, and scalable alternative to the standard FFN in Transformers.
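To make the idea concrete, below is a minimal, non-fused PyTorch sketch of a multi-head FFN with dynamically weighted parallel sub-networks. All shapes, the softmax gating formulation, and the names (`MultiHeadFFN`, `n_heads`, `n_subnets`, `d_inter`) are illustrative assumptions, not the paper's implementation; in particular, FlashMHF's key contribution is fusing these steps into a single I/O-aware kernel that never materializes the per-head intermediate activations in HBM, which this reference version does not attempt.

```python
# Reference (unfused) sketch of a multi-head FFN with dynamically weighted
# parallel sub-networks. Hypothetical design for illustration only; the
# paper's FlashMHF computes the same kind of output online in SRAM instead
# of materializing the (batch, seq, heads, subnets, d_inter) tensor below.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadFFN(nn.Module):
    def __init__(self, d_model: int, n_heads: int, n_subnets: int, d_inter: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.n_subnets = n_subnets
        self.d_head = d_model // n_heads
        # Per-head up/down projections for each parallel sub-network.
        self.w_up = nn.Parameter(
            torch.randn(n_heads, n_subnets, self.d_head, d_inter) * self.d_head ** -0.5)
        self.w_down = nn.Parameter(
            torch.randn(n_heads, n_subnets, d_inter, self.d_head) * d_inter ** -0.5)
        # Token-dependent gate that mixes sub-network outputs per head.
        self.gate = nn.Linear(d_model, n_heads * n_subnets)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> heads: (batch, seq, n_heads, d_head)
        b, s, _ = x.shape
        xh = x.view(b, s, self.n_heads, self.d_head)
        # Each sub-network is a small two-layer FFN applied per head.
        hidden = F.silu(torch.einsum('bshd,hkdf->bshkf', xh, self.w_up))
        out = torch.einsum('bshkf,hkfd->bshkd', hidden, self.w_down)
        # Dynamic weights: softmax over sub-networks, per token and per head.
        w = self.gate(x).view(b, s, self.n_heads, self.n_subnets)
        w = F.softmax(w, dim=-1)
        # Weighted sum over sub-networks, then merge heads back to d_model.
        out = (out * w.unsqueeze(-1)).sum(dim=3)
        return out.reshape(b, s, -1)
```

Note how this layout addresses the scaling imbalance the abstract describes: instead of one intermediate dimension growing while the head dimension stays fixed, capacity is added by widening `n_subnets`, keeping the per-sub-network ratio of `d_inter` to `d_head` constant.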

Minshen Zhang, Xiang Hu, Jianguo Li, Wei Wu, Kewei Tu • 2025

Related benchmarks

Task | Dataset | Result | Rank
Language Modeling | PG-19 (val) | - | 19
Natural Language Understanding | Downstream Benchmarks (HellaSwag, SIQA, PIQA, OBQA, WinoGrande, RACE) | HellaSwag: 42.96 | 10