Diffutron: A Masked Diffusion Language Model for Turkish
About
Masked Diffusion Language Models (MDLMs) have emerged as a compelling non-autoregressive alternative to standard large language models; however, their application to morphologically rich languages remains limited. In this paper, we introduce $\textit{Diffutron}$, a masked diffusion language model specifically designed for Turkish. Our approach leverages a resource-efficient training pipeline, starting with LoRA-based continual pre-training of a multilingual encoder on a large-scale corpus. To enable generative capabilities, we employ a progressive instruction-tuning strategy, sequentially adapting the model on general and task-specific instruction sets. Experimental results across comprehensive benchmarks demonstrate that, despite its compact size, our model achieves competitive performance compared to existing multi-billion-parameter baselines. These findings validate the effectiveness of masked diffusion modeling combined with multi-stage tuning for non-autoregressive text generation in Turkish.
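The generative process described above can be illustrated with the iterative-unmasking decoding loop typical of MDLMs: start from a fully masked sequence and, over a fixed number of steps, reveal the positions the model is most confident about. The sketch below is illustrative only; `toy_model` and the token names are placeholders standing in for Diffutron's actual denoiser, not its real components.

```python
import random

MASK = "<mask>"

def toy_model(tokens):
    # Hypothetical stand-in for the denoiser: for each masked position,
    # return a (predicted token, confidence) pair. A real MDLM would run
    # a Transformer forward pass and read off per-position logits here.
    return {i: (f"tok{i}", random.random())
            for i, t in enumerate(tokens) if t == MASK}

def diffusion_decode(length, steps, model=toy_model, seed=0):
    """Iterative unmasking: begin fully masked, then at each step fill in
    the most confident predictions until no masks remain."""
    random.seed(seed)
    tokens = [MASK] * length
    for step in range(steps):
        preds = model(tokens)
        if not preds:
            break
        # Unmask enough positions to finish within the remaining steps.
        k = max(1, len(preds) // (steps - step))
        most_confident = sorted(preds.items(), key=lambda kv: -kv[1][1])[:k]
        for i, (tok, _) in most_confident:
            tokens[i] = tok
    return tokens

decoded = diffusion_decode(length=8, steps=4)
```

Unlike autoregressive decoding, each step refines the whole sequence in parallel, which is what makes the non-autoregressive formulation attractive.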
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Reading Comprehension | BELEBELE Target Language | MRC Score | 27 | 24 |
| Semantic Textual Similarity | STSb-TR | STSb Score | 18.78 | 8 |
| Irony Detection | IronyTR | Score | 52 | 8 |
| Cross-lingual Question Answering | EXAMS TR | Score | 27.74 | 8 |
| News Category Classification | News Category Classification | Score | 32.4 | 8 |
| Natural Language Inference | MNLI TR | Score | 33.29 | 8 |
| Causal Reasoning | XCOPA | XCOPA Causal Reasoning Score | 53.8 | 8 |