
Racka: Efficient Hungarian LLM Adaptation on Academic Infrastructure

About

We present Racka, a lightweight, continually pretrained large language model designed to bridge the resource gap between Hungarian and high-resource languages such as English and German. Racka employs parameter-efficient continual pretraining via Low-Rank Adaptation (LoRA) on a Qwen-3 4B backbone, making the recipe practical on A100 (40GB)-based HPC clusters with low inter-node bandwidth. To better match the training distribution, we replace and adapt the tokenizer, achieving substantially lower tokenization fertility for Hungarian while maintaining competitive performance in English and German. The model is trained on 160B subword tokens drawn from a mixture of internet and high-quality curated sources, with a composition of 44% Hungarian, 24% English, 21% German, and 11% code. This data mix is chosen to mitigate catastrophic forgetting and preserve high-resource language capabilities during continual pretraining. Our preliminary results indicate modest but stable gains from language adaptation.
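As a rough illustration of the recipe described above (not the authors' released training code), the sketch below shows how a LoRA continual-pretraining setup with a replaced tokenizer could be configured using Hugging Face transformers and peft. The checkpoint id, tokenizer path, LoRA rank/alpha, and target modules are assumptions for illustration; only the 44/24/21/11 language mix comes from the abstract.

```python
# Hypothetical sketch of a LoRA continual-pretraining setup; hyperparameters
# and paths are illustrative assumptions, not Racka's published configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "Qwen/Qwen3-4B"  # Qwen-3 4B backbone (exact checkpoint id assumed)

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# The replaced, Hungarian-adapted tokenizer would be loaded here;
# "racka-tokenizer" is a placeholder path, not a released artifact.
tokenizer = AutoTokenizer.from_pretrained("racka-tokenizer")
model.resize_token_embeddings(len(tokenizer))  # align embeddings with the new vocabulary

# Low-rank adapters on attention and MLP projections (rank/alpha/dropout are assumed).
# With a replaced vocabulary, the embedding and output layers are kept fully
# trainable via modules_to_save (an assumption about the setup).
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    modules_to_save=["embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Data mixture from the abstract: 44% Hungarian, 24% English, 21% German, 11% code,
# usable as sampling weights when interleaving the pretraining corpora.
MIX_WEIGHTS = {"hu": 0.44, "en": 0.24, "de": 0.21, "code": 0.11}
```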

Zsolt Csibi, Bence György Gortka, Natabara Gyöngyössy, Kornél Nagy, Dávid Márk Nemeskey, Martin Sallai, András Simonyi, András Márk Szekeres, Gábor Palkó — (1) __INSTITUTION_9__ Department of Digital Humanities, Eötvös Loránd University; (2) Department of Artificial Intelligence, Eötvös Loránd University • 2026

Related benchmarks

Task                                     Dataset                                         Result                        Rank
Contextual Understanding and Reasoning   OpenHuEval                                      HuWildBench WBScore: 57.17    4
Natural Language Understanding           HULU (Hungarian Language Understanding) (val)   HuCOLA Accuracy: 86.2         4
General Language Model Reasoning         LM-Eval-Harness Hungarian                       ARC (hu) Acc: 34.5            4
