
Splintering Nonconcatenative Languages for Better Tokenization

About

Common subword tokenization algorithms like BPE and UnigramLM assume that text can be split into meaningful units by concatenative measures alone. This is not true for languages such as Hebrew and Arabic, where morphology is encoded in root-template patterns, or Malay and Georgian, where split affixes are common. We present SPLINTER, a pre-processing step which rearranges text into a linear form that better represents such nonconcatenative morphologies, enabling meaningful contiguous segments to be found by the tokenizer. We demonstrate SPLINTER's merit using both intrinsic measures that evaluate token vocabularies in Hebrew, Arabic, and Malay, and downstream tasks using BERT-architecture models trained for Hebrew.
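The core idea above, separating a word's root consonants from its templatic pattern so that a concatenative tokenizer sees contiguous, reusable segments, can be sketched as follows. This is an illustrative toy, not the paper's actual SPLINTER algorithm: the `root_chars` argument is assumed known here, whereas the real method derives its rearrangement from corpus statistics.

```python
def splinter_like(word: str, root_chars: str) -> str:
    """Toy linearization of a templatic word (NOT the paper's algorithm).

    Greedily pulls the given root consonants out of `word`, leaving '_'
    slots in the template, and emits "root+template" as one linear string.
    """
    root, template = [], []
    remaining = list(root_chars)  # root consonants still to be matched
    for ch in word:
        if remaining and ch == remaining[0]:
            root.append(ch)          # character belongs to the root
            remaining.pop(0)
            template.append("_")     # leave a slot in the template
        else:
            template.append(ch)      # character belongs to the pattern
    return "".join(root) + "+" + "".join(template)

# Arabic "kataba" ("he wrote"), root k-t-b:
print(splinter_like("kataba", "ktb"))  # -> ktb+_a_a_a
```

After such a rearrangement, the root "ktb" and the pattern "_a_a_a" are each contiguous, so a standard BPE or UnigramLM trainer can learn them as ordinary subword units.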

Bar Gazit, Shaltiel Shmidman, Avi Shmidman, Yuval Pinter • Ben-Gurion University of the Negev; DICTA • 2025

Related benchmarks

Task                | Dataset                    | Result         | Rank
Dependency Parsing  | Hebrew (he) (test)         | LAS: 89        | 10
Prefix Segmentation | Hebrew Segmentation (test) | Accuracy: 99.3 | 2
Question Answering  | Hebrew QA (test)           | F1 Score: 74.4 | 2

Other info

Code
