PLLuM: A Family of Polish Large Language Models
About
Large Language Models (LLMs) play a central role in modern artificial intelligence, yet their development has been primarily focused on English, resulting in limited support for other languages. We present PLLuM (Polish Large Language Model), the largest open-source family of foundation models tailored specifically for the Polish language. Developed by a consortium of major Polish research institutions, PLLuM addresses the need for high-quality, transparent, and culturally relevant language models beyond the English-centric commercial landscape. We describe the development process, including the construction of a new 140-billion-token Polish text corpus for pre-training, a custom instruction dataset of 77k examples, and a preference-optimization dataset of 100k examples. A key component is a Responsible AI framework that incorporates strict data governance and a hybrid module for output correction and safety filtering. We detail the models' architecture, training procedures, and alignment techniques for both base and instruction-tuned variants, and demonstrate their utility in a downstream task within public administration. By releasing these models publicly, PLLuM aims to foster open research and strengthen sovereign AI technologies in Poland.
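Because the models are released openly, the instruction-tuned variants can be loaded with standard tooling. Below is a minimal sketch using the Hugging Face `transformers` library; the repository ID `CYFRAGOVPL/PLLuM-12B-instruct` is an assumption for illustration and should be replaced with the identifier published with the release.

```python
# Minimal sketch: querying an instruction-tuned PLLuM variant via
# Hugging Face transformers. The model ID below is an assumption;
# substitute the actual repository ID from the public release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CYFRAGOVPL/PLLuM-12B-instruct"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A Polish prompt: "Briefly explain what the PLLuM project is."
messages = [{"role": "user", "content": "Wyjaśnij krótko, czym jest projekt PLLuM."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```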
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Linguistic and Cultural Competency | Polish Linguistic and Cultural Competency Benchmark (PLCC) | Avg Score | 69.67 | 52 |
| Emotional Intelligence | Polish EQ-Bench | Overall Score | 72.56 | 34 |
| Polish Text Understanding | CPTUB | Overall Avg | 3.67 | 31 |
| Medical Knowledge Performance | Polish Board Certification Examinations (test) | Average Score (%) | 38.53 | 29 |
| Language Understanding | INCLUDE base 44 | Average Score | 44.2 | 21 |