CWoMP: Morpheme Representation Learning for Interlinear Glossing
About
Interlinear glossed text (IGT) is a standard notation for language documentation that is linguistically rich but laborious to produce manually. Recent automated IGT methods treat glosses as character sequences, neglecting their compositional structure. We propose CWoMP (Contrastive Word-Morpheme Pretraining), which instead treats morphemes as atomic form-meaning units with learned representations. A contrastively trained encoder aligns words-in-context with their constituent morphemes in a shared embedding space; an autoregressive decoder then generates the morpheme sequence by retrieving entries from a mutable lexicon of these embeddings. Predictions are interpretable, since each is grounded in a lexicon entry, and users can improve results at inference time by expanding the lexicon without retraining. We evaluate on diverse low-resource languages, showing that CWoMP outperforms existing methods while being significantly more efficient, with particularly strong gains in extremely low-resource settings.
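The retrieval step described above can be sketched as nearest-neighbor lookup over a mutable lexicon of morpheme embeddings. This is a minimal illustration, not the paper's implementation: the toy 4-d vectors, the `lexicon` dict, and the `retrieve` helper are hypothetical, and cosine similarity is assumed as the scoring function.

```python
import numpy as np

def normalize(v):
    """Scale a vector to unit length so dot products equal cosine similarity."""
    return v / np.linalg.norm(v)

# Hypothetical mutable lexicon: gloss label -> morpheme embedding (toy 4-d vectors).
lexicon = {
    "1SG": normalize(np.array([1.0, 0.2, 0.0, 0.0])),
    "PST": normalize(np.array([0.0, 1.0, 0.1, 0.0])),
    "run": normalize(np.array([0.0, 0.0, 1.0, 0.3])),
}

def retrieve(query, lexicon):
    """Return the gloss whose embedding has highest cosine similarity to the query."""
    q = normalize(query)
    return max(lexicon, key=lambda gloss: float(q @ lexicon[gloss]))

# A word-in-context embedding near "run" retrieves that entry.
query = np.array([0.1, 0.0, 0.9, 0.2])
print(retrieve(query, lexicon))  # → run

# Expanding the lexicon at inference time needs no retraining: a newly added
# entry immediately competes in retrieval and wins if it is a closer match.
lexicon["walk"] = normalize(np.array([0.05, 0.0, 0.95, 0.21]))
print(retrieve(query, lexicon))  # → walk
```

Because predictions are arguments-of-the-maximum over lexicon entries, every output morpheme can be traced back to the entry that produced it, which is what makes the glosses interpretable.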
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Glossing | Arapaho arp (test) | Morpheme Accuracy | 84.1 | 3 |
| Glossing | Gitksan git (test) | Morpheme Accuracy | 14.8 | 3 |
| Glossing | Lezgian lez (test) | Morpheme Accuracy | 69.3 | 3 |
| Glossing | Natügu ntu (test) | Morpheme Accuracy | 67.8 | 3 |
| Glossing | Nyangbo nyb (test) | Morpheme Accuracy | 89.1 | 3 |
| Morphological Segmentation | SIGMORPHON arp 2023 (test) | Morpheme Error Rate | 7 | 3 |
| Morphological Segmentation | SIGMORPHON ddo 2023 (test) | Morpheme Error Rate | 2 | 3 |
| Morphological Segmentation | SIGMORPHON git 2023 (test) | Morpheme Error Rate | 0.57 | 3 |
| Morphological Segmentation | SIGMORPHON lez 2023 (test) | Morpheme Error Rate | 11 | 3 |
| Morphological Segmentation | SIGMORPHON nyb 2023 (test) | Morpheme Error Rate | 2 | 3 |