drsphelps at SemEval-2022 Task 2: Learning idiom representations using BERTRAM

About

This paper describes our system for SemEval-2022 Task 2, Multilingual Idiomaticity Detection and Sentence Embedding, sub-task B. We modify a standard BERT sentence transformer by adding an embedding for each idiom, created with BERTRAM from a small number of contexts. We show that this technique improves the quality of idiom representations and leads to better performance on the task. We also analyse our final results and show that the quality of the produced idiom embeddings is highly sensitive to the quality of the input contexts.
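The core idea above can be sketched in a toy form: extend a BERT-style input embedding matrix with one new row per idiom, where the row's vector would come from BERTRAM, and make the tokenizer map the whole idiom phrase to that single new token. The snippet below is a minimal illustration, not the authors' code; the function names (`add_idiom_embedding`, `tokenize`), the toy vocabulary, and the stand-in BERTRAM vector are all assumptions.

```python
import numpy as np

def add_idiom_embedding(emb_matrix, vocab, idiom, bertram_vector):
    """Append a vocabulary entry whose embedding is the (stand-in) BERTRAM vector."""
    vocab[idiom] = emb_matrix.shape[0]                      # next free token id
    return np.vstack([emb_matrix, bertram_vector[None, :]]), vocab

def tokenize(sentence, vocab, idioms):
    """Greedy lookup that treats each known idiom as a single token."""
    ids, words, i = [], sentence.lower().split(), 0
    while i < len(words):
        for idiom in idioms:
            parts = idiom.split()
            if words[i:i + len(parts)] == parts:            # idiom matched as one unit
                ids.append(vocab[idiom])
                i += len(parts)
                break
        else:
            ids.append(vocab.get(words[i], vocab["[UNK]"]))
            i += 1
    return ids

# Toy vocabulary and 4-dimensional embeddings
vocab = {"[UNK]": 0, "he": 1, "kicked": 2, "the": 3, "bucket": 4}
emb = np.random.default_rng(0).normal(size=(5, 4))

# Pretend BERTRAM produced this vector from a few idiom contexts
bertram_vec = np.ones(4)
emb, vocab = add_idiom_embedding(emb, vocab, "kicked the bucket", bertram_vec)

ids = tokenize("He kicked the bucket", vocab, ["kicked the bucket"])
print(ids)  # → [1, 5]: the three-word idiom collapses to the single new token id
```

In the real system the same principle applies at the scale of a pretrained sentence transformer: the tokenizer is extended so each idiom is a single token, and the new embedding rows are initialised from BERTRAM rather than learned from scratch.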

Dylan Phelps • 2022

Related benchmarks

| Task                        | Dataset                                        | Result                     | Rank |
|-----------------------------|------------------------------------------------|----------------------------|------|
| Semantic Textual Similarity | English STS                                    | Average Score: 76.43       | 68   |
| Semantic Textual Similarity | SemEval-2022 Task 2 Idiomatic STS (evaluation) | Spearman Rho (All): 0.6504 | 14   |
| Semantic Textual Similarity | SemEval STS Portuguese (PT)                    | Overall Score: 73.07       | 3    |
| Semantic Textual Similarity | SemEval STS Galician (GL)                      | MWE Score: 29.24           | 3    |
