
Bielik v3 Small: Technical Report

About

We introduce Bielik v3, a series of parameter-efficient generative text models (1.5B and 4.5B) optimized for Polish language processing. These models demonstrate that smaller, well-optimized architectures can achieve performance comparable to much larger counterparts while requiring substantially fewer computational resources. Our approach incorporates several key innovations: a custom Polish tokenizer (APT4) that significantly improves token efficiency, Weighted Instruction Cross-Entropy Loss to balance learning across instruction types, and Adaptive Learning Rate that dynamically adjusts based on training progress. Trained on a meticulously curated corpus of 292 billion tokens spanning 303 million documents, these models excel across multiple benchmarks, including the Open PL LLM Leaderboard, Complex Polish Text Understanding Benchmark, Polish EQ-Bench, and Polish Medical Leaderboard. The 4.5B parameter model achieves results competitive with models 2-3 times its size, while the 1.5B model delivers strong performance despite its extremely compact profile. These advances establish new benchmarks for parameter-efficient language modeling in less-represented languages, making high-quality Polish language AI more accessible for resource-constrained applications.
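The abstract names Weighted Instruction Cross-Entropy Loss but does not spell out the weighting scheme. As a minimal sketch only, assuming a fixed weight per instruction type (the `INSTRUCTION_WEIGHTS` table and `instruction_types` argument are hypothetical, not from the report), one plausible form in PyTorch is per-token cross-entropy averaged per example and then rescaled by the example's instruction-type weight:

```python
import torch
import torch.nn.functional as F

# Hypothetical per-instruction-type weights; the actual scheme used for
# Bielik v3 is not specified in this abstract.
INSTRUCTION_WEIGHTS = {"qa": 1.0, "summarization": 0.8, "translation": 1.2}

def weighted_instruction_ce(logits, targets, instruction_types, pad_id=-100):
    """Cross-entropy where each example's loss is scaled by a weight
    chosen according to its instruction type.

    logits:  (batch, seq_len, vocab)
    targets: (batch, seq_len), with prompt/pad positions set to pad_id
    instruction_types: list of length `batch`, one type label per example
    """
    batch, seq_len, vocab = logits.shape
    # Per-token CE with reduction="none" keeps the (batch, seq_len) shape
    # so we can aggregate and weight per example.
    token_loss = F.cross_entropy(
        logits.reshape(-1, vocab), targets.reshape(-1),
        ignore_index=pad_id, reduction="none",
    ).reshape(batch, seq_len)

    mask = (targets != pad_id).float()
    # Mean loss per example over the non-ignored tokens.
    per_example = (token_loss * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)

    weights = torch.tensor(
        [INSTRUCTION_WEIGHTS.get(t, 1.0) for t in instruction_types],
        device=logits.device,
    )
    # Weighted mean over the batch, so over-represented instruction types
    # do not dominate the gradient signal.
    return (per_example * weights).sum() / weights.sum()
```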
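Likewise, the abstract says only that the Adaptive Learning Rate "dynamically adjusts based on training progress", without giving the rule. Purely as an illustrative sketch, assuming a loss-plateau heuristic (similar in spirit to PyTorch's built-in `ReduceLROnPlateau`, which the report may or may not use), the idea looks like this:

```python
class LossAdaptiveLR:
    """Toy scheduler: shrink the learning rate when the training loss
    stops improving. Illustrative only; the report's actual Adaptive
    Learning Rate rule is not described in this abstract."""

    def __init__(self, optimizer, patience=100, factor=0.5, min_lr=1e-6):
        self.opt = optimizer
        self.patience = patience   # steps without improvement to tolerate
        self.factor = factor       # multiplicative LR decay on plateau
        self.min_lr = min_lr
        self.best = float("inf")
        self.bad_steps = 0

    def step(self, loss):
        # Track the best loss seen so far; count stagnant steps.
        if loss < self.best:
            self.best = loss
            self.bad_steps = 0
        else:
            self.bad_steps += 1
        # On a plateau, decay the LR of every parameter group.
        if self.bad_steps >= self.patience:
            self.bad_steps = 0
            for group in self.opt.param_groups:
                group["lr"] = max(group["lr"] * self.factor, self.min_lr)
```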

Krzysztof Ociepa, Łukasz Flis, Remigiusz Kinas, Krzysztof Wróbel, Adrian Gwoździej • 2025

Related benchmarks

Task                              Dataset                                                        Result                Rank
Linguistic Implicatures Decoding  Open PL LLM Leaderboard, Implicatures component (base models)  Average Score: 45.47  30
Language Understanding            INCLUDE base 44                                                Average Score: 35.9   21
Large Language Model Evaluation   Open LLM Leaderboard                                           Average Score: 64.89  19
Large Language Model Evaluation   Open LLM Leaderboard v1 (test)                                 Average Score: 61.02  14
