Nanbeige4.1-3B: A Small General Model that Reasons, Aligns, and Acts
About
We present Nanbeige4.1-3B, a unified generalist language model that simultaneously achieves strong agentic behavior, code generation, and general reasoning with only 3B parameters. To the best of our knowledge, it is the first open-source small language model (SLM) to achieve such versatility in a single model. To improve reasoning and preference alignment, we combine point-wise and pair-wise reward modeling, ensuring high-quality, human-aligned responses. For code generation, we design complexity-aware rewards in reinforcement learning, optimizing both correctness and efficiency. For deep search, we synthesize complex training data and incorporate turn-level supervision during training. This enables stable long-horizon tool interactions, allowing Nanbeige4.1-3B to reliably execute up to 600 tool-call turns for complex problem-solving. Extensive experimental results show that Nanbeige4.1-3B significantly outperforms prior models of similar scale, such as Nanbeige4-3B-2511 and Qwen3-4B, and even surpasses much larger models, such as Qwen3-30B-A3B. Our results demonstrate that small models can achieve broad competence and strong specialization simultaneously, redefining the potential of 3B-parameter models.
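The combination of point-wise and pair-wise reward modeling mentioned above can be illustrated with a minimal sketch. This is not the released training code: the scalar scores, the 0/1 quality labels, and the mixing weight `alpha` are all hypothetical, and the pair-wise term is the standard Bradley-Terry ranking loss commonly used in reward modeling.

```python
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def pointwise_loss(score: float, label: float) -> float:
    """Binary cross-entropy between a scalar reward score and a 0/1 quality label."""
    p = sigmoid(score)
    return -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))


def pairwise_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry loss: penalizes the chosen response scoring below the rejected one."""
    return -math.log(sigmoid(score_chosen - score_rejected))


def combined_loss(score_chosen: float, score_rejected: float,
                  alpha: float = 0.5) -> float:
    """Hypothetical weighted sum of the two objectives (alpha is an assumption)."""
    pt = 0.5 * (pointwise_loss(score_chosen, 1.0)
                + pointwise_loss(score_rejected, 0.0))
    pw = pairwise_loss(score_chosen, score_rejected)
    return alpha * pt + (1.0 - alpha) * pw
```

Correctly ranked pairs (chosen scored above rejected) yield a lower combined loss than misranked ones, so gradient descent on this objective pushes the reward model toward human preference orderings while the point-wise term keeps the absolute scores calibrated.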
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Scientific Reasoning | GPQA | Accuracy | 83.8 | 55 |
| Deep Search | xBench DeepSearch (05) | Score | 75 | 14 |
| Deep Search | GAIA Text-Only | Score | 0.699 | 14 |
| Deep Search | HLE Text-Only | Score | 22.29 | 14 |
| Deep Search | BrowseComp-ZH | Score | 31.83 | 14 |
| Deep Search | BrowseComp | Score | 19.12 | 14 |
| Deep Search | SEAL-0 | Score | 41.44 | 11 |
| Preference Modeling | Arena-Hard v2 | Win Rate | 73.2 | 9 |
| Deep Search | xBench DeepSearch-10 | Score | 39 | 8 |
| Code Generation | LiveCodeBench (LCB) V6 | Pass@1 | 76.9 | 6 |