K-EXAONE Technical Report
About
This technical report presents K-EXAONE, a large-scale multilingual language model developed by LG AI Research. K-EXAONE is built on a Mixture-of-Experts architecture with 236B total parameters, of which 23B are activated during inference. It supports a 256K-token context window and covers six languages: Korean, English, Spanish, German, Japanese, and Vietnamese. We evaluate K-EXAONE on a comprehensive benchmark suite spanning reasoning, agentic, general, Korean, and multilingual abilities. Across these evaluations, K-EXAONE demonstrates performance comparable to that of open-weight models of similar size. K-EXAONE, designed to advance AI for a better life, is positioned as a powerful proprietary AI foundation model for a wide range of industrial and research applications.
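The sparse-activation figure above (23B of 236B parameters active) follows from standard top-k expert routing: a gating network scores all experts per token, but only the top-scoring few actually run. The sketch below illustrates that mechanism with toy sizes; it is a generic MoE routing example, not K-EXAONE's actual configuration, whose expert count and routing details are not stated in this report.

```python
import numpy as np

# Toy Mixture-of-Experts layer with top-k gating.
# Sizes are illustrative only; the ~1/10 active-parameter ratio loosely
# mirrors the 23B/236B figure but is NOT K-EXAONE's real design.
rng = np.random.default_rng(0)

D_MODEL = 8      # hidden size (toy)
N_EXPERTS = 10   # total experts (toy)
TOP_K = 1        # experts activated per token (toy)

# Each expert is a tiny linear map; the router scores experts per token.
experts = rng.standard_normal((N_EXPERTS, D_MODEL, D_MODEL)) * 0.1
router = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.1

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moe_forward(x):
    """x: (tokens, D_MODEL) -> output of same shape; TOP_K experts fire per token."""
    logits = x @ router                            # (tokens, N_EXPERTS)
    topk = np.argsort(logits, axis=-1)[:, -TOP_K:] # chosen expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        gates = softmax(logits[t, topk[t]])        # renormalize over chosen experts
        for g, e in zip(gates, topk[t]):
            out[t] += g * (x[t] @ experts[e])      # weighted sum of chosen experts
    return out, topk

tokens = rng.standard_normal((4, D_MODEL))
y, chosen = moe_forward(tokens)
print(y.shape, chosen.shape)  # (4, 8) (4, 1)
```

Because only `TOP_K` of `N_EXPERTS` expert matrices are touched per token, compute per token scales with the active parameters rather than the total parameter count, which is the efficiency argument behind MoE models of this class.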
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | IFEval | -- | -- | 292 |
| Instruction Following | IFBench | Pass@1 (Strict) | 40.5 | 68 |
| Safety | WildJailbreak | -- | -- | 21 |
| Agentic Tool-use | τ2-Bench (Tau-bench) Retail and Telecom | Overall Success Rate | 44 | 17 |
| Math | IMO-AnswerBench | Score | 40 | 9 |
| Long-context Reasoning | AA-LCR | Score | 45.2 | 8 |
| Mathematical Problem Solving | HMMT Nov 2025 | Score | 43.2 | 8 |
| Mathematics | HRM8K | Score | 81.4 | 8 |
| Knowledge | KMMLU-Pro | Score | 63.5 | 7 |
| Agentic Tool-use | τ2-Bench Retail | -- | -- | 6 |