CWoMP: Morpheme Representation Learning for Interlinear Glossing

About

Interlinear glossed text (IGT) is a standard notation for language documentation that is linguistically rich but laborious to produce manually. Recent automated IGT methods treat glosses as character sequences, neglecting their compositional structure. We propose CWoMP (Contrastive Word-Morpheme Pretraining), which instead treats morphemes as atomic form-meaning units with learned representations. A contrastively trained encoder aligns words-in-context with their constituent morphemes in a shared embedding space; an autoregressive decoder then generates the morpheme sequence by retrieving entries from a mutable lexicon of these embeddings. Predictions are interpretable, since each is grounded in lexicon entries, and users can improve results at inference time by expanding the lexicon without retraining. We evaluate on diverse low-resource languages, showing that CWoMP outperforms existing methods while being significantly more efficient, with particularly strong gains in extremely low-resource settings.
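As a rough illustration of the two mechanisms the abstract describes, the sketch below pairs a CLIP-style symmetric contrastive (InfoNCE) objective over word-morpheme pairs with nearest-neighbour retrieval from a mutable lexicon of morpheme embeddings. This is a minimal sketch under our own simplifying assumptions, not the authors' implementation: the function names (contrastive_alignment_loss, retrieve_gloss), the one-morpheme-per-word batching, and the temperature value are all hypothetical.

```python
# Hypothetical sketch of contrastive word-morpheme alignment and
# lexicon retrieval, in the spirit of CWoMP; not the authors' code.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(word_emb, morpheme_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of (word-in-context, morpheme) pairs.

    word_emb:     (B, D) embeddings of words in sentence context
    morpheme_emb: (B, D) embeddings of morpheme lexicon entries
    Row i of each tensor is assumed to be a positive pair; the other
    rows in the batch serve as in-batch negatives (a simplification:
    a word generally has several constituent morphemes).
    """
    w = F.normalize(word_emb, dim=-1)
    m = F.normalize(morpheme_emb, dim=-1)
    logits = w @ m.t() / temperature                  # (B, B) similarities
    targets = torch.arange(w.size(0), device=w.device)
    # Symmetric: match words to morphemes and morphemes to words.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

def retrieve_gloss(word_emb, lexicon_emb, lexicon_entries, k=1):
    """Nearest-neighbour lookup in a mutable lexicon of morpheme embeddings.

    Because retrieval is a similarity search, new (embedding, entry)
    pairs can be appended to the lexicon at inference time without
    retraining, as the abstract describes.
    """
    sims = F.normalize(word_emb, dim=-1) @ F.normalize(lexicon_emb, dim=-1).t()
    topk = sims.topk(k, dim=-1).indices               # (B, k) lexicon indices
    return [[lexicon_entries[j] for j in row] for row in topk.tolist()]
```

In the actual model the retrieved lexicon entries would condition an autoregressive decoder; retrieval alone is shown here to illustrate why expanding the lexicon at inference time can improve predictions without any retraining.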

Morris Alper, Enora Rice, Bhargav Shandilya, Alexis Palmer, Lori Levin • 2026

Related benchmarks

Task                        Dataset                      Metric               Result  Rank
Glossing                    Arapaho arp (test)           Morpheme Accuracy    84.1    3
Glossing                    Gitksan git (test)           Morpheme Accuracy    14.8    3
Glossing                    Lezgian lez (test)           Morpheme Accuracy    69.3    3
Glossing                    Natuügu ntu (test)           Morpheme Accuracy    67.8    3
Glossing                    Nyangbo nyb (test)           Morpheme Accuracy    89.1    3
Morphological Segmentation  SIGMORPHON arp 2023 (test)   Morpheme Error Rate  7       3
Morphological Segmentation  SIGMORPHON ddo 2023 (test)   Morpheme Error Rate  2       3
Morphological Segmentation  SIGMORPHON git 2023 (test)   Morpheme Error Rate  0.57    3
Morphological Segmentation  SIGMORPHON lez 2023 (test)   Morpheme Error Rate  11      3
Morphological Segmentation  SIGMORPHON nyb 2023 (test)   Morpheme Error Rate  2       3

Showing 10 of 17 rows.
