
Memory Networks

About

We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.
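The abstract describes four components that are learned jointly: an input map I, a generalization (write) module G, an output module O that selects supporting memories, and a response module R. The toy sketch below illustrates that I/G/O/R loop only; it is not the paper's learned model. The paper scores memories with learned embeddings and chains k=2 supporting facts, whereas here word overlap stands in for the learned score, and the stop-word list and example facts are illustrative assumptions.

```python
# Toy illustration of the memory network I/G/O/R loop (not the paper's
# learned model): word overlap stands in for the learned scoring function.

STOP = {"the", "a", "is", "in", "to", "with", "where", "went", "moved"}

def I(text):
    """Input map: convert text to a bag of lowercase words."""
    return set(text.lower().rstrip(".?").split())

def G(memory, features):
    """Generalization: write the new features into the next free slot."""
    memory.append(features)

def O(memory, query, hops=2):
    """Output: select `hops` supporting memories, folding each hop's
    pick into the context for the next (the paper chains k=2 facts)."""
    supports, context = [], set(query)
    for _ in range(hops):
        remaining = [m for m in memory if m not in supports]
        if not remaining:
            break
        best = max(remaining, key=lambda m: len(m & context))
        supports.append(best)
        context |= best
    return supports

def R(supports, query):
    """Response: a single new word from the last supporting memory."""
    seen = set(query) | STOP
    for s in supports[:-1]:
        seen |= s
    candidates = supports[-1] - seen
    return min(candidates) if candidates else ""

# Usage: a two-fact chain in the spirit of the simulated-world task.
memory = []
for fact in ["The milk is with Joe", "Joe is in the office"]:
    G(memory, I(fact))
query = I("Where is the milk?")
print(R(O(memory, query), query))  # office
```

Answering "Where is the milk?" requires both facts: the first links the milk to Joe, and only the second locates Joe, which is the kind of multi-sentence chaining the abstract refers to.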

Jason Weston, Sumit Chopra, Antoine Bordes • 2014

Related benchmarks

| Task                  | Dataset          | Metric                             | Result  | Rank |
|-----------------------|------------------|------------------------------------|---------|------|
| Machine Comprehension | CNN (val)        | Accuracy                           | 0.662   | 80   |
| Machine Comprehension | CNN (test)       | Accuracy                           | 69.4    | 77   |
| Machine Comprehension | CBT-NE (test)    | Accuracy                           | 66.6    | 56   |
| Machine Comprehension | CBT-CN (test)    | Accuracy                           | 63      | 56   |
| Question Answering    | bAbI (test)      | Mean Error                         | 2.81    | 54   |
| Machine Comprehension | CBT-NE (val)     | Accuracy                           | 70.4    | 37   |
| Machine Comprehension | CBT-CN (val)     | Accuracy                           | 64.2    | 37   |
| Question Answering    | Reverb (test)    | Accuracy                           | 72      | 15   |
| Question Answering    | bAbI 10k (test)  | Task 1: 1 Supporting Fact Error    | 0.00e+0 | 15   |
| Machine Comprehension | CBT (test)       | Named Entities                     | 49.3    | 12   |

Showing 10 of 14 rows
