
MIXAR: Scaling Autoregressive Pixel-based Language Models to Multiple Languages and Scripts

About

Pixel-based language models are gaining momentum as alternatives to traditional token-based approaches, promising to circumvent tokenization challenges. However, the inherent perceptual diversity across languages poses a significant hurdle for multilingual generalization in pixel space. This paper introduces MIXAR, the first generative pixel-based language model trained on eight languages spanning a range of scripts. We empirically evaluate MIXAR against previous pixel-based models as well as comparable tokenizer-based models, demonstrating substantial performance improvements on discriminative and generative multilingual tasks. Additionally, we show that MIXAR is robust to languages never seen during training. These results are further strengthened when scaling the model to 0.5B parameters, which improves not only its capabilities on generative tasks like LAMBADA but also its robustness to input perturbations such as orthographic attacks.

Chen Hu, Yintao Tai, Antonio Vergari, Frank Keller, Alessandro Suglia • 2026
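The abstract highlights robustness to orthographic attacks. The paper's exact perturbation scheme is not given here; one common form of orthographic attack swaps characters for visually similar Unicode "confusables", which leaves the rendered pixels nearly unchanged while producing unfamiliar subword tokens. A minimal illustrative sketch (the character map and function are hypothetical, not from the paper):

```python
import random

# Hypothetical map from Latin letters to look-alike Cyrillic codepoints.
CONFUSABLES = {
    "a": "\u0430",  # Cyrillic а
    "e": "\u0435",  # Cyrillic е
    "o": "\u043e",  # Cyrillic о
    "c": "\u0441",  # Cyrillic с
    "p": "\u0440",  # Cyrillic р
}


def orthographic_attack(text: str, rate: float = 0.5, seed: int = 0) -> str:
    """Swap a fraction of attackable characters for look-alike glyphs."""
    rng = random.Random(seed)
    return "".join(
        CONFUSABLES[ch] if ch in CONFUSABLES and rng.random() < rate else ch
        for ch in text
    )


attacked = orthographic_attack("pixel language models", rate=1.0)
# The attacked string looks near-identical to a human reader, but its
# codepoints differ: a subword tokenizer sees out-of-vocabulary tokens,
# while a pixel-based model sees almost the same image.
```

This is why a pixel-space model can be expected to degrade more gracefully than a tokenizer-based one under such perturbations.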

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Natural Language Understanding | GLUE SST-2 | Accuracy | 90 | 531 |
| Natural Language Inference | XNLI (test) | -- | -- | 167 |
| Topic Classification | SIB200 | -- | -- | 11 |
| Last-word prediction | LAMBADA de | Zero-Shot Accuracy | 9.7 | 7 |
| Last-word prediction | LAMBADA fr | Zero-Shot Accuracy | 10.4 | 7 |
| Last-word prediction | LAMBADA it | Zero-Shot Accuracy | 11.1 | 7 |
| Last-word prediction | LAMBADA average over en, de, es, fr, it | Zero-Shot Accuracy | 9.6 | 7 |
| Last-word prediction | LAMBADA es | Zero-Shot Accuracy | 4.1 | 7 |
| Question Answering | bAbI en | Few-Shot Accuracy | 22.5 | 7 |
| Last-word prediction | LAMBADA en | Zero-Shot Accuracy | 12.6 | 7 |
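Most rows above report last-word prediction on LAMBADA: the model reads a passage and must produce its final word exactly. A minimal sketch of how such an accuracy is computed (exact implementations vary, e.g. in whitespace or case handling; this scoring function is an assumption, not the paper's code):

```python
def lambada_accuracy(predictions: list[str], targets: list[str]) -> float:
    """Fraction of passages whose predicted final word exactly matches."""
    assert len(predictions) == len(targets)
    correct = sum(p.strip() == t.strip() for p, t in zip(predictions, targets))
    return correct / len(targets)


# Example: one of two final words matched -> accuracy 0.5.
score = lambada_accuracy(["dog", "cat"], ["dog", "bird"])
```

Because scoring is exact-match on a single word, even small zero-shot percentages (as in the table) reflect genuine open-vocabulary generation rather than chance.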
