
KromHC: Manifold-Constrained Hyper-Connections with Kronecker-Product Residual Matrices

About

The success of Hyper-Connections (HC) in neural networks (NN) has also highlighted issues of training instability and restricted scalability. Manifold-Constrained Hyper-Connections (mHC) mitigate these challenges by projecting the residual connection space onto a Birkhoff polytope; however, mHC faces two issues: 1) its iterative Sinkhorn-Knopp (SK) algorithm does not always yield exactly doubly stochastic residual matrices; 2) it incurs a prohibitive $\mathcal{O}(n^3C)$ parameter complexity, where $n$ is the width of the residual stream and $C$ is the feature dimension. The recently proposed mHC-lite reparametrizes the residual matrix via the Birkhoff-von Neumann theorem to guarantee double stochasticity, but suffers a factorial explosion in its parameter complexity, $\mathcal{O} \left( nC \cdot n! \right)$. To address both challenges, we propose \textbf{KromHC}, which uses \underline{Kro}necker products of smaller doubly stochastic matrices to parametrize the residual matrix in \underline{mHC}. By enforcing manifold constraints on the factor residual matrices along each mode of the tensorized residual stream, KromHC guarantees exact double stochasticity of the residual matrices while reducing the parameter complexity to $\mathcal{O}(n^2C)$. Comprehensive experiments demonstrate that KromHC matches or even outperforms state-of-the-art (SOTA) mHC variants while requiring significantly fewer trainable parameters. The code is available at \texttt{https://github.com/wz1119/KromHC}.
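The key property behind the abstract's claim is that the Kronecker product of doubly stochastic matrices is itself doubly stochastic, so a large residual matrix can be built from small factors that are cheap to constrain. The sketch below (not the authors' implementation; a minimal NumPy illustration with made-up sizes) projects two small factors toward the Birkhoff polytope with Sinkhorn-Knopp and checks that their Kronecker product has unit row and column sums:

```python
import numpy as np

def sinkhorn_knopp(M, iters=50):
    """Alternately normalize rows and columns of a positive matrix.

    This converges toward a doubly stochastic matrix but, as the abstract
    notes, a finite number of iterations is only approximately exact.
    """
    M = M.copy()
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)  # make each row sum to 1
        M /= M.sum(axis=0, keepdims=True)  # make each column sum to 1
    return M

rng = np.random.default_rng(0)
# Two small doubly stochastic factors (2x2 each: 4 + 4 = 8 entries)
A = sinkhorn_knopp(rng.random((2, 2)) + 0.1)
B = sinkhorn_knopp(rng.random((2, 2)) + 0.1)

# Their Kronecker product is a 4x4 doubly stochastic residual matrix,
# parametrized by far fewer entries than a full 4x4 = 16-entry matrix.
H = np.kron(A, B)

assert np.allclose(H.sum(axis=0), 1.0)  # columns sum to 1
assert np.allclose(H.sum(axis=1), 1.0)  # rows sum to 1
```

Because row sums of `np.kron(A, B)` factor as the product of the row sums of `A` and `B` (and likewise for columns), the product is exactly doubly stochastic whenever its factors are, which is the mechanism KromHC exploits to guarantee the constraint while shrinking the parameter count.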

Wuyang Zhou, Yuxuan Gu, Giorgos Iacovides, Danilo Mandic • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Commonsense Reasoning | Commonsense Reasoning Suite (test) | HellaSwag Accuracy | 0.364 | 62
Downstream Performance Evaluation | CORE | CORE Score | 16.872 | 17
Language Modeling and Reasoning | BigBench (Lamb, SQuAD, CoQA, BBH, LSAT, LangID) | Avg Score | 24 | 8
LLM Pretraining | FineWeb-Edu (train) | Training Loss | 2.966 | 8
LLM Pretraining | FineWeb-Edu (val) | BPB | 0.862 | 8
Language Modeling Evaluation | TinyStories | Grammar | 6.56 | 5
Story Generation | TinyStories | Grammar Score | 6.04 | 5
Story Generation Evaluation | TinyStories GPT-4.1 Nano | Grammar | 6.26 | 5

Other info

GitHub
