
Learning Semantic Textual Similarity from Conversations

About

We present a novel approach to learn representations for sentence-level semantic similarity using conversational data. Our method trains an unsupervised model to predict conversational input-response pairs. The resulting sentence embeddings perform well on the semantic textual similarity (STS) benchmark and SemEval 2017's Community Question Answering (CQA) question similarity subtask. Performance is further improved by introducing multitask training combining the conversational input-response prediction task and a natural language inference task. Extensive experiments show the proposed model achieves the best performance among all neural models on the STS benchmark and is competitive with the state-of-the-art feature engineered and mixed systems in both tasks.
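The core training signal described above — predicting which response goes with which conversational input — is commonly implemented as a dual encoder scored with dot products, using the other responses in the batch as negatives. The sketch below illustrates that objective only; the encoders here are toy random projections (an assumption for illustration), not the paper's actual network architecture or data.

```python
# Sketch of an input-response prediction objective with in-batch negatives.
# All names (encode, P_in, P_resp) and the toy encoder are illustrative
# assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def encode(texts, proj):
    """Toy encoder: a random feature vector per text, then a projection."""
    feats = np.stack([rng.standard_normal(16) for _ in texts])
    return feats @ proj

inputs = ["how are you?", "what time is it?", "where do you live?"]
responses = ["i am fine.", "it is noon.", "in paris."]

P_in = rng.standard_normal((16, 8))    # input-side projection (toy)
P_resp = rng.standard_normal((16, 8))  # response-side projection (toy)

U = encode(inputs, P_in)        # (B, d) input embeddings
V = encode(responses, P_resp)   # (B, d) response embeddings

scores = U @ V.T                # (B, B): input i scored against every response

# Softmax cross-entropy: the true response for input i is response i,
# so the target sits on the diagonal; other rows' responses act as negatives.
logits = scores - scores.max(axis=1, keepdims=True)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
print(float(loss))
```

Minimizing this loss pulls each input embedding toward its true response and away from the in-batch negatives, which is what makes the resulting sentence embeddings useful for semantic similarity.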

Yinfei Yang, Steve Yuan, Daniel Cer, Sheng-yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun-Hsuan Sung, Brian Strope, Ray Kurzweil · 2018

Related benchmarks

Task                         | Dataset              | Result                         | Rank
Natural Language Inference   | SNLI (test)          | Accuracy 84.1                  | 681
Semantic Textual Similarity  | STS Benchmark (dev)  | Pearson correlation (r) 0.835  | 21
Semantic Textual Similarity  | STS Benchmark (test) | Pearson correlation (r) 0.808  | 16
Community Question Answering | SemEval CQA (test)   | MAP 47.42                      | 13
