MIXAR: Scaling Autoregressive Pixel-based Language Models to Multiple Languages and Scripts
About
Pixel-based language models are gaining momentum as alternatives to traditional token-based approaches, promising to circumvent tokenization challenges. However, the inherent perceptual diversity across languages poses a significant hurdle for multilingual generalization in pixel space. This paper introduces MIXAR, the first generative pixel-based language model trained on eight languages covering a range of scripts. We empirically evaluate MIXAR against previous pixel-based models as well as comparable tokenizer-based models, demonstrating substantial performance improvements on discriminative and generative multilingual tasks. Additionally, we show that MIXAR is robust to languages never seen during training. These results are further strengthened when scaling the model to 0.5B parameters, which improves not only its capabilities on generative tasks such as LAMBADA but also its robustness to input perturbations such as orthographic attacks.
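To make the core idea concrete, here is a minimal sketch (not the MIXAR implementation) of how a pixel-based LM replaces subword tokens with fixed-size image patches: a line of text is rasterized to a bitmap, and the bitmap is sliced into patches that play the role tokens play in a text LM. The patch geometry (`HEIGHT`, `PATCH_W`) and the toy bitmap are illustrative assumptions; real systems render text with a font engine.

```python
# Hypothetical patch geometry, for illustration only (not MIXAR's actual sizes).
HEIGHT, PATCH_W = 8, 4

def patchify(bitmap):
    """Slice a (HEIGHT x width) bitmap into flattened patches of width PATCH_W.

    Each patch is a tuple of HEIGHT * PATCH_W pixel values; the resulting
    sequence of patches is what an autoregressive pixel model predicts,
    in place of a sequence of subword tokens.
    """
    width = len(bitmap[0])
    assert width % PATCH_W == 0, "pad the rendered line to a patch multiple"
    patches = []
    for x in range(0, width, PATCH_W):
        patch = tuple(
            bitmap[row][x + col]
            for row in range(HEIGHT)
            for col in range(PATCH_W)
        )
        patches.append(patch)
    return patches

# Toy stand-in for a rendered text line: 8 rows x 12 columns of 0/1 pixels.
bitmap = [[(r + c) % 2 for c in range(12)] for r in range(HEIGHT)]
patches = patchify(bitmap)
print(len(patches), len(patches[0]))  # -> 3 32 (three patches of 32 pixels)
```

Because the vocabulary is pixels rather than a fixed subword inventory, the same pipeline applies unchanged to any script that can be rendered, which is what makes the approach attractive for multilingual and unseen-language settings.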
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Natural Language Understanding | GLUE (SST-2) | 90 | 531 |
| Natural Language Inference | XNLI (test) | -- | 167 |
| Topic Classification | SIB200 | -- | 11 |
| Last-word prediction | LAMBADA en | Zero-Shot Accuracy 12.6 | 7 |
| Last-word prediction | LAMBADA de | Zero-Shot Accuracy 9.7 | 7 |
| Last-word prediction | LAMBADA es | Zero-Shot Accuracy 4.1 | 7 |
| Last-word prediction | LAMBADA fr | Zero-Shot Accuracy 10.4 | 7 |
| Last-word prediction | LAMBADA it | Zero-Shot Accuracy 11.1 | 7 |
| Last-word prediction | LAMBADA average (en, de, es, fr, it) | Zero-Shot Accuracy 9.6 | 7 |
| Question Answering | bAbI en | Few-Shot Accuracy 22.5 | 7 |