
Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers

About

In deep neural nets, lower-level embedding layers account for a large portion of the total number of parameters. Tikhonov regularization, graph-based regularization, and hard parameter sharing are approaches that introduce explicit biases into training in the hope of reducing statistical complexity. Alternatively, we propose stochastic shared embeddings (SSE), a data-driven approach to regularizing embedding layers, which stochastically transitions between embeddings during stochastic gradient descent (SGD). Because SSE integrates seamlessly with existing SGD algorithms, it can be used with only minor modifications when training large-scale neural networks. We develop two versions of SSE: SSE-Graph, which uses knowledge graphs of embeddings, and SSE-SE, which uses no prior information. We provide theoretical guarantees for our method and show its empirical effectiveness on 6 distinct tasks, from simple neural networks with one hidden layer in recommender systems, to the Transformer and BERT in natural language. We find that when used along with widely-used regularization methods such as weight decay and dropout, our proposed SSE can further reduce overfitting, which often leads to more favorable generalization results.
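The simpler variant, SSE-SE, needs no knowledge graph: during training, each embedding index is stochastically swapped for another index before the lookup, which regularizes the embedding table without changing the SGD loop. The following is a minimal sketch of that idea based on the abstract alone; the function name, the replacement probability `p`, and the choice of a uniform replacement distribution are our assumptions, not the paper's exact specification.

```python
import numpy as np

def sse_se(indices, vocab_size, p, rng):
    """SSE-SE sketch: with probability p, replace each embedding index
    with one drawn uniformly from the vocabulary (training only).
    The uniform replacement distribution is an assumption here."""
    indices = np.asarray(indices)
    # Decide independently for each index whether to swap it.
    swap_mask = rng.random(indices.shape) < p
    # Candidate replacement indices, drawn uniformly over the vocabulary.
    random_ids = rng.integers(0, vocab_size, size=indices.shape)
    return np.where(swap_mask, random_ids, indices)

# Example: perturb a batch of item indices before the embedding lookup.
rng = np.random.default_rng(0)
batch = np.array([3, 17, 42, 8])
perturbed = sse_se(batch, vocab_size=100, p=0.1, rng=rng)
```

At inference time the swap would simply be disabled (p = 0), analogous to how dropout is turned off at test time.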

Liwei Wu, Shuqing Li, Cho-Jui Hsieh, James Sharpnack • 2019

Related benchmarks

Task                      | Dataset               | Result                           | Rank
Next-item recommendation  | Men Amazon (test)     | HR@10: 39.7                      | 29
Next-item recommendation  | Fashion Amazon (test) | HR@10: 0.385                     | 29
Next-item recommendation  | Games Amazon (test)   | HR@10: 0.754                     | 27
Next-item recommendation  | Amazon Beauty (test)  | HR@10: 48.1                      | 15
Sequential Recommendation | Games                 | Average Batch Runtime (s): 0.015 | 9
