
Ordered Memory

About

Stack-augmented recurrent neural networks (RNNs) have been of interest to the deep learning community for some time. However, the difficulty of training memory models remains a problem obstructing the widespread use of such models. In this paper, we propose the Ordered Memory architecture. Inspired by Ordered Neurons (Shen et al., 2018), we introduce a new attention-based mechanism and use its cumulative probability to control the writing and erasing operations of the memory. We also introduce a new Gated Recursive Cell to compose lower-level representations into a higher-level representation. We demonstrate that our model achieves strong performance on the logical inference task (Bowman et al., 2015) and the ListOps task (Nangia and Bowman, 2018). We can also interpret the model to retrieve the induced tree structure, and find that these induced structures align with the ground truth. Finally, we evaluate our model on the Stanford Sentiment Treebank tasks (Socher et al., 2013), and find that it performs comparably to state-of-the-art methods in the literature.
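The cumulative-probability gating described above can be illustrated with a short sketch. This is a simplified illustration, not the paper's exact update rule: `ordered_memory_step` and its arguments are hypothetical names, and the real model uses learned networks to produce the attention scores and candidate content. The key idea shown is that a softmax attention over memory slots is turned, via a cumulative sum, into a monotone gate that erases slots up to the attended position while preserving the slots above it.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def ordered_memory_step(memory, candidate, scores):
    """One illustrative memory update gated by a cumulative probability.

    memory:    (slots, dim) current memory contents
    candidate: (dim,)       new content to be written
    scores:    (slots,)     attention logits over memory slots
    """
    p = softmax(scores)            # attention distribution over slots
    cum_erase = np.cumsum(p)       # mass at or below the attended slot
    keep = 1.0 - cum_erase         # slots above the attended slot are kept
    # Erase gated slots, then write the candidate in proportion to the attention.
    return keep[:, None] * memory + p[:, None] * candidate[None, :]
```

Because `keep` is monotonically decreasing in the slot index, lower slots are overwritten more aggressively than higher ones, which is what induces the stack-like, ordered behaviour of the memory.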

Yikang Shen, Shawn Tan, Arian Hosseini, Zhouhan Lin, Alessandro Sordoni, Aaron Courville• 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Inference | SNLI | Accuracy | 85.5 | 174
Natural Language Understanding | GLUE (val) | SST-2 | 90.4 | 170
Natural Language Inference | MNLI (matched) | Accuracy | 72.5 | 110
Natural Language Inference | MNLI (mismatched) | Accuracy | 73.2 | 68
Natural Language Inference | SNLI hard 1.0 (test) | Accuracy | 70.6 | 27
Paraphrase Detection | PAWS QQP | Accuracy | 38.1 | 16
Logical Expression Evaluation | ListOps-O near-IID (Lengths < 1000, Arguments < 5) | Accuracy | 99.9 | 11
Logical Expression Evaluation | ListOps-O Argument Generalization (Arguments 10) | Accuracy | 84.15 | 11
Logical Expression Evaluation | LRA ListOps Length 2000 Arguments 10 | Accuracy | 80.1 | 11
Logical Expression Evaluation | ListOps-O Length Generalization (Lengths 200-300) | Accuracy | 99.6 | 11
(Showing 10 of 30 benchmark results.)
