
NorwAI's Large Language Models: Technical Report

About

Norwegian, spoken by approximately five million people, remains underrepresented in many of the most significant breakthroughs in Natural Language Processing (NLP). To address this gap, the NorLLM team at NorwAI has developed a family of models specifically tailored to Norwegian and other Scandinavian languages, building on diverse Transformer-based architectures such as GPT, Mistral, Llama2, Mixtral and Magistral. These models are either pretrained from scratch or continually pretrained on 25B to 88.45B tokens, using a Norwegian-extended tokenizer and advanced post-training strategies to optimize performance, enhance robustness, and improve adaptability across various real-world tasks. Notably, the instruction-tuned variants (e.g., Mistral-7B-Instruct and Mixtral-8x7B-Instruct) showcase strong assistant-style capabilities, underscoring their potential for practical deployment in interactive and domain-specific applications. The NorwAI large language models are openly available to Nordic organizations, companies and students for both research and experimental use. This report provides detailed documentation of the model architectures, training data, tokenizer design, fine-tuning strategies, deployment, and evaluations.
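The abstract highlights a Norwegian-extended tokenizer. The motivation for extending a tokenizer's vocabulary can be illustrated with a toy greedy longest-match tokenizer: whole Norwegian words added to the vocabulary tokenize as single units instead of many fragments. The vocabularies and example word below are purely illustrative and are not the actual NorwAI tokenizer or its vocabulary.

```python
def greedy_tokenize(text, vocab):
    """Greedy longest-match tokenization against a vocabulary.

    Falls back to single characters when no vocabulary entry matches,
    mimicking how subword tokenizers never fail on unseen strings.
    """
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

# Toy English-centric vocabulary: the Norwegian compound
# "kunstigintelligens" splits into many small pieces.
base_vocab = {"kun", "st", "ig", "in", "tel", "li", "gens"}
# Extended vocabulary adds common Norwegian words as whole units.
extended_vocab = base_vocab | {"kunstig", "intelligens"}

word = "kunstigintelligens"
print(len(greedy_tokenize(word, base_vocab)))      # → 7
print(len(greedy_tokenize(word, extended_vocab)))  # → 2
```

Fewer tokens per word means shorter sequences for the same text, which lowers both training and inference cost on Norwegian input relative to an English-centric vocabulary.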

Jon Atle Gulla, Peng Liu, Lemei Zhang• 2026

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
| --- | --- | --- | --- | --- |
| Large Language Model Evaluation | NorEval (test) | Overall Score | 0.455 | 8 |
| News Summarization | CNN/DailyMail | BLEU | 5.41 | 8 |
| Open-domain Conversation | NO-ConvAI2 (NLEBench, test) | BLEU | 4.28 | 7 |
| Paraphrase | NO-MRPC (NLEBench, test) | Accuracy | 73.7 | 6 |
| Question Answering | NO-BoolQ (NLEBench, test) | Accuracy | 0.632 | 6 |
| Natural Language Inference | NO-QNLI (NLEBench, test) | Accuracy | 79.7 | 6 |
