
Mapping Networks

About

The escalating parameter counts of modern deep learning models pose a fundamental challenge to efficient training and to controlling overfitting. We address this by introducing \emph{Mapping Networks}, which replace the high-dimensional weight space with a compact, trainable latent vector, based on the hypothesis that the trained parameters of large networks reside on smooth, low-dimensional manifolds. The Mapping Theorem, enforced by a dedicated Mapping Loss, establishes the existence of a mapping from this latent space to the target weight space both theoretically and in practice. Mapping Networks significantly reduce overfitting and achieve comparable or better performance than the target network across complex vision and sequence tasks, including image classification, deepfake detection, semantic segmentation, and time-series analysis, with a $\mathbf{99.5\%}$, i.e., around $500\times$, reduction in trainable parameters.
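The core idea in the abstract can be sketched in a few lines: instead of training the target network's full weight vector, only a small latent vector is trainable, and a fixed mapping projects it into the target weight space. This is a minimal structural sketch, not the paper's actual method; the dimensions, the linear form of the mapping, and all names here are illustrative assumptions (the paper's Mapping Network and Mapping Loss are more involved).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): a target network with 50,000
# weights and a 100-dimensional latent vector gives the ~500x reduction
# in trainable parameters that the abstract reports.
latent_dim = 100
target_params = 50_000

# The compact latent vector -- the only trainable parameters.
z = rng.normal(size=latent_dim)

# A fixed linear map from latent space to the target weight space,
# standing in for the learned Mapping Network.
M = rng.normal(size=(target_params, latent_dim)) / np.sqrt(latent_dim)

def generate_weights(z):
    """Map the compact latent vector to the full target weight vector."""
    return M @ z

w = generate_weights(z)
print(w.shape)                      # (50000,) -- full weight vector
print(latent_dim / target_params)   # 0.002 -- trainable fraction (~0.2%)
```

Training would then backpropagate through `generate_weights` into `z` alone, so the optimizer touches 100 parameters rather than 50,000.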

Lord Sen, Shyamapada Mukherjee • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic Segmentation | Cityscapes (test) | mIoU | 48.23 | 1145 |
| Image Classification | FashionMNIST (test) | Accuracy | 94.83 | 218 |
| Image Classification | MNIST (test) | Test Accuracy | 99.67 | 126 |
| Deepfake Detection | FF++ (test) | -- | -- | 39 |
| Deepfake Detection | Celeb-DF (test) | Accuracy | 95.1 | 24 |
| Time-series Analysis | Air Pollution | MSE Loss | 6.10e-4 | 3 |
