
Hierarchically Gated Recurrent Neural Network for Sequence Modeling

About

Transformers have surpassed RNNs in popularity due to their superior abilities in parallel training and long-term dependency modeling. Recently, there has been a renewed interest in using linear RNNs for efficient sequence modeling. These linear RNNs often employ gating mechanisms in the output of the linear recurrence layer while ignoring the significance of using forget gates within the recurrence. In this paper, we propose a gated linear RNN model dubbed Hierarchically Gated Recurrent Neural Network (HGRN), which includes forget gates that are lower bounded by a learnable value. The lower bound increases monotonically when moving up layers. This allows the upper layers to model long-term dependencies and the lower layers to model more local, short-term dependencies. Experiments on language modeling, image classification, and long-range arena benchmarks showcase the efficiency and effectiveness of our proposed model. The source code is available at https://github.com/OpenNLPLab/HGRN.
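The core mechanism described above, a linear recurrence whose forget gate is bounded below by a per-layer value that grows monotonically with depth, can be sketched in a few lines. This is not the authors' implementation (see the linked repository for that); it is a minimal illustration under two stated assumptions: the monotone lower bounds are built here from a softmax over per-layer logits followed by a cumulative sum, and the input side of the recurrence is taken as the complement `1 - f` of the forget gate.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_lower_bounds(logits):
    """One illustrative way to obtain forget-gate lower bounds that
    increase monotonically with layer depth: softmax the per-layer
    logits, then take a cumulative sum. Layer 0 gets bound 0, and the
    last softmax term is excluded so every bound stays below 1."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return np.concatenate(([0.0], np.cumsum(p)[:-1]))

def hgrn_layer(x, gate_pre, gamma):
    """Element-wise linear recurrence with a lower-bounded forget gate.

    x:        (T, d) input sequence
    gate_pre: (T, d) pre-activations for the forget gate (hypothetical
              stand-in for a learned projection of the input)
    gamma:    scalar lower bound in [0, 1) for this layer
    """
    T, d = x.shape
    h = np.zeros(d)
    out = np.zeros_like(x)
    for t in range(T):
        # Squash into [gamma, 1): the gate can never fully forget.
        f = gamma + (1.0 - gamma) * sigmoid(gate_pre[t])
        h = f * h + (1.0 - f) * x[t]
        out[t] = h
    return out
```

With a small `gamma` (lower layers) the gate can drop close to 0 and the state tracks local context; with `gamma` near 1 (upper layers) the state decays slowly, which is how the hierarchy separates short-term from long-term dependencies.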

Zhen Qin, Songlin Yang, Yiran Zhong • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | ImageNet-1K | Top-1 Acc | 80.09 | 836
Language Modeling | WikiText-103 (test) | Perplexity | 24.82 | 524
Commonsense Reasoning | Common Sense Reasoning Tasks | Avg Score | 49.19 | 241
Language Modeling | WikiText-103 (val) | PPL | 24.14 | 180
Long-range sequence modeling | Long Range Arena (LRA) | Text Accuracy | 88.14 | 164
Language Modeling | The Pile | Perplexity | 4.14 | 25
Unified Multi-task Language Understanding and Instruction Following | Open LLM Leaderboard v1 (test) | MMLU-P Accuracy | 11.4 | 19
Comparative Ranking | Unified Evaluation v1 (aggregate) | Average Rank | 5.75 | 19
Commonsense Reasoning and Knowledge Question Answering | General Ability Suite (ARC, HellaSwag, PIQA, BoolQ, WinoGrande, COPA, OBQA, SciQ), various (test) | ARC-C Accuracy | 27.1 | 19
Language Modeling | Open LLM Leaderboard & General Ability Benchmarks (MMLU-P, GPQA, BBH, MATH, MuSR, IFEval, ARC, Hellaswag, PIQA, BoolQ, WinoGrande, COPA, OpenBookQA, SciQ), unified (test) | MMLU-P Accuracy | 11.4 | 16
Showing 10 of 12 rows

Other info

Code: https://github.com/OpenNLPLab/HGRN
