K-EXAONE Technical Report
About
This technical report presents K-EXAONE, a large-scale multilingual language model developed by LG AI Research. K-EXAONE is built on a Mixture-of-Experts architecture with 236B total parameters, activating 23B parameters during inference. It supports a 256K-token context window and covers six languages: Korean, English, Spanish, German, Japanese, and Vietnamese. We evaluate K-EXAONE on a comprehensive benchmark suite spanning reasoning, agentic, general, Korean, and multilingual abilities. Across these evaluations, K-EXAONE demonstrates performance comparable to open-weight models of similar size. K-EXAONE, designed to advance AI for a better life, is positioned as a powerful proprietary AI foundation model for a wide range of industrial and research applications.
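The sparse-activation idea behind the abstract — a Mixture-of-Experts model holds 236B total parameters but runs only 23B per token — can be sketched with a toy top-k router. This is a minimal illustration, not K-EXAONE's actual architecture: the expert count (8), the number of active experts per token (2), and the linear toy experts are all assumptions for demonstration only.

```python
import math
import random

random.seed(0)

NUM_EXPERTS = 8   # illustrative only; the report does not state K-EXAONE's expert count
TOP_K = 2         # experts activated per token (assumption for illustration)
DIM = 4           # toy hidden dimension

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def make_expert():
    # Toy expert: a random linear map over the hidden vector.
    w = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(DIM)]
    return lambda x: [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def moe_forward(token, gate_weights, experts):
    # Router: score every expert for this token, then keep only the
    # top-k. The unselected experts never run, which is why active
    # parameters are a small fraction of total parameters.
    logits = [sum(w * t for w, t in zip(row, token)) for row in gate_weights]
    probs = softmax(logits)
    top = sorted(range(NUM_EXPERTS), key=lambda i: probs[i], reverse=True)[:TOP_K]
    norm = sum(probs[i] for i in top)
    out = [0.0] * DIM
    for i in top:
        weight = probs[i] / norm  # renormalize over the selected experts
        expert_out = experts[i](token)
        out = [o + weight * e for o, e in zip(out, expert_out)]
    return out, top

experts = [make_expert() for _ in range(NUM_EXPERTS)]
gate = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

token = [0.5, -0.2, 0.1, 0.9]
out, active = moe_forward(token, gate, experts)
print(f"active experts: {sorted(active)} of {NUM_EXPERTS}")
```

With 2 of 8 toy experts active, only a quarter of the expert parameters run per token; K-EXAONE's 23B-of-236B activation ratio (roughly 10%) follows the same principle at scale.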
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | IFEval | Accuracy | 89.7 | 625 |
| Reasoning | GPQA Diamond | Accuracy | 79.1 | 135 |
| Reasoning | MMLU-Pro | Accuracy | 83.8 | 95 |
| Instruction Following | IFBench | Pass@1 (Strict) | 40.5 | 72 |
| Instruction Following | IFBench | Accuracy | 67.3 | 33 |
| Agentic Tool-use | tau2-bench Retail | -- | -- | 22 |
| Agentic Tool-use | tau2-bench Airline | -- | -- | 22 |
| Safety | WildJailbreak | -- | -- | 21 |
| Agentic Tool-use | τ2-Bench (Tau-bench) Retail and Telecom | Overall Success Rate | 44 | 17 |
| Reasoning | LiveCodeBench v6 | Score | 80.7 | 11 |