
Learned in Translation: Contextualized Word Vectors

About

Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, we use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. We show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe improves performance of our baseline models to the state of the art.
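The approach above can be sketched in a few lines: a pretrained encoder maps word vectors to context vectors, which are concatenated with the original vectors before a downstream model. The snippet below is a minimal illustration only, not the authors' released code; the untrained bidirectional LSTM stands in for the MT-pretrained encoder, and the random embedding table stands in for frozen GloVe vectors.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab_size, emb_dim, hidden_dim = 100, 300, 300

# Stand-in for pretrained GloVe word vectors (frozen in the paper).
glove = nn.Embedding(vocab_size, emb_dim)

# Stand-in for the MT-pretrained encoder: a two-layer bidirectional LSTM.
mt_encoder = nn.LSTM(emb_dim, hidden_dim, num_layers=2,
                     bidirectional=True, batch_first=True)

def cove_features(token_ids):
    """Return [GloVe(w); CoVe(w)] for each token (hypothetical helper name)."""
    vecs = glove(token_ids)                    # (batch, seq, 300)
    context, _ = mt_encoder(vecs)              # (batch, seq, 600), bi-LSTM outputs
    return torch.cat([vecs, context], dim=-1)  # (batch, seq, 900)

tokens = torch.randint(0, vocab_size, (2, 5))  # batch of 2 sentences, length 5
out = cove_features(tokens)
print(out.shape)  # torch.Size([2, 5, 900])
```

Downstream task models then consume the 900-dimensional concatenated vectors in place of plain word embeddings.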

Bryan McCann, James Bradbury, Caiming Xiong, Richard Socher (2017)

Related benchmarks

Task                           Dataset               Metric       Result   Rank
Natural Language Inference     SNLI (test)           Accuracy     88.1     681
Question Answering             SQuAD v1.1 (dev)      F1 Score     79.9     375
Sentiment Analysis             IMDB (test)           Accuracy     91.8     248
Text Classification            SST-2 (test)          Accuracy     90.3     185
Sentiment Analysis             SST-5 (test)          Accuracy     53.7     173
Sentiment Classification       IMDB (test)           Error Rate   8.2      144
Text Classification            TREC (test)           --           --       113
Question Answering             SQuAD v1.1 (val)      F1 Score     79.9     70
Text Classification            SST-5 (test)          Accuracy     53.7     58
6-way Question Classification  TREC 6-class (test)   Accuracy     95.8     23

Showing 10 of 13 rows.
