Just Rank: Rethinking Evaluation with Word and Sentence Similarities
About
Word and sentence embeddings are useful feature representations in natural language processing. However, intrinsic evaluation for embeddings lags far behind, and there has been no significant update in the past decade. Word and sentence similarity tasks have become the de facto evaluation method, which leads models to overfit to such evaluations and negatively impacts the development of embedding models. This paper first points out the problems with using semantic similarity as the gold standard for word and sentence embedding evaluation. We then propose a new intrinsic evaluation method, EvalRank, which shows a much stronger correlation with downstream tasks. Extensive experiments on 60+ models and popular datasets support our claims. Finally, we release a practical evaluation toolkit for future benchmarking.
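To make the ranking idea concrete, here is a minimal sketch of a ranking-based intrinsic evaluation in the spirit of EvalRank: for each known-similar word pair, rank the second word against the rest of the vocabulary by cosine similarity to the first, and report hits@k. The toy embeddings and the `hits_at_k` helper below are illustrative assumptions, not the toolkit's actual API; the official setup is defined in the released toolkit.

```python
import numpy as np

def hits_at_k(emb, pairs, k=3):
    """Rank each positive pair's target word among the whole vocabulary
    by cosine similarity to the query word; count a hit when the target
    lands in the top-k. (Illustrative sketch, not the official toolkit.)"""
    words = list(emb)
    index = {w: i for i, w in enumerate(words)}
    mat = np.array([emb[w] for w in words], dtype=float)
    mat /= np.linalg.norm(mat, axis=1, keepdims=True)  # unit-normalize rows
    hits = 0
    for query, target in pairs:
        sims = mat @ mat[index[query]]        # cosine similarity to all words
        sims[index[query]] = -np.inf          # exclude the query word itself
        top = np.argsort(-sims)[:k]           # indices of the k nearest words
        hits += index[target] in top
    return hits / len(pairs)

# Toy embeddings (hypothetical): "cat"/"kitten" and "car"/"truck" are close.
emb = {
    "cat":    np.array([1.0, 0.1, 0.0]),
    "kitten": np.array([0.9, 0.2, 0.0]),
    "car":    np.array([0.0, 1.0, 0.2]),
    "truck":  np.array([0.1, 0.9, 0.3]),
}
print(hits_at_k(emb, [("cat", "kitten"), ("car", "truck")], k=1))  # → 1.0
```

A good embedding space places each pair's target among the query's nearest neighbors, so higher hits@k suggests better-structured embeddings; this is the kind of ranking signal that correlates with downstream performance better than raw similarity scores.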
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Citation Intent Classification | SciCite | Spearman's rho | 0.9011 | 23 |
| Question Classification | TREC | Spearman's rho (x100) | 78.72 | 23 |
| Sentiment Analysis | MR | Spearman's rho | 0.8882 | 23 |
| Sentiment Analysis | SST2 | Spearman's rho (x100) | 93.32 | 23 |
| Sentiment Analysis | SST5 | Spearman's rho (x100) | 76.65 | 23 |
| Opinion Polarity Detection | MPQA | Spearman's rho | 0.8205 | 12 |
| Paraphrase Detection | MRPC | Spearman's rho (x100) | 30.87 | 12 |
| Natural Language Entailment | SICK-E | Spearman's rho (x100) | 62.77 | 12 |
| Sentiment Analysis | CR | Spearman's rho | 89.36 | 11 |