Developing and Evaluating Tiny to Medium-Sized Turkish BERT Models
About
This study introduces and evaluates tiny, mini, small, and medium-sized uncased Turkish BERT models, aiming to bridge the research gap in less-resourced languages. We trained these models on a diverse dataset encompassing over 75GB of text from multiple sources and tested them on several tasks, including mask prediction, sentiment analysis, news classification, and zero-shot classification. Despite their smaller size, our models exhibited robust performance, including on zero-shot tasks, while offering computational efficiency and faster execution times. Our findings provide valuable insights into the development and application of smaller language models, especially in the context of the Turkish language.
Himmet Toprak Kesgin, Muzaffer Kaan Yuce, Mehmet Fatih Amasyali • 2023
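Because these are standard uncased BERT checkpoints, mask prediction works out of the box with the Hugging Face `transformers` fill-mask pipeline. Below is a minimal sketch; the hub ID `ytu-ce-cosmos/turkish-tiny-bert-uncased` is an assumption about where one of the released checkpoints is published, so substitute the model you actually want to use.

```python
# Minimal fill-mask sketch for one of the uncased Turkish BERT models.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="ytu-ce-cosmos/turkish-tiny-bert-uncased",  # assumed hub ID; replace as needed
)

# "Türkiye'nin başkenti [MASK]." = "The capital of Turkey is [MASK]."
for prediction in fill_mask("Türkiye'nin başkenti [MASK]."):
    print(f"{prediction['token_str']}: {prediction['score']:.3f}")
```

The same checkpoints can be loaded with `AutoModel.from_pretrained` and fine-tuned for the downstream tasks listed below, such as sentiment analysis or news classification.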
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text Embedding | MTEB | MTEB Score | 57.89 | 45 |
| Text Embedding | MTEB Turkish (test) | Overall MTEB Score | 46.23 | 23 |
| Retrieval | Legal | Legal Score | 43.8 | 10 |
| Legal Retrieval | Turkish Legal | Legal Score | 43.8 | 9 |
| Masked Language Modeling | Turkish Datasets (blackerx/turkish_v2, fthbrmnby/turkish_product_reviews, hazal/Turkish-Biomedical-corpus-trM, newmindai/EuroHPC-Legal) (test) | MLM Avg (%) | 65.03 | 7 |
| Turkish Natural Language Understanding and Retrieval | TabiBench 1.0 (test) | Text Clf F1 | 84.25 | 5 |
| Turkish Natural Language Understanding | TabiBench 1.0 (test) | TabiBench Score | 72.26 | 4 |