
Mi:dm K 2.5 Pro

About

The evolving LLM landscape demands capabilities beyond simple text generation, prioritizing multi-step reasoning, long-context understanding, and agentic workflows. This shift challenges existing models in enterprise environments, especially in Korean-language and domain-specific scenarios where scaling alone is insufficient. We introduce Mi:dm K 2.5 Pro, a 32B-parameter flagship LLM designed to address enterprise-grade complexity through reasoning-focused optimization. Our methodology builds a robust data foundation via a quality-centric curation pipeline that uses abstract syntax tree (AST) analysis for code, gap-filling synthesis for mathematics, and an LLM-based quality evaluator. Pre-training scales the model via layer-predictor-based Depth Upscaling (DuS) and a progressive strategy supporting a 128K-token context window. Post-training introduces a specialized multi-stage pipeline, including Reasoning SFT, model merging, and asynchronous reinforcement learning (RL), to develop complex problem-solving skills. "Fusion Training" then rebalances these capabilities with conversational fluency, consistent response styling, and reliable tool use. Evaluations show that Mi:dm K 2.5 Pro achieves competitive performance against leading global and domestic models, and it sets state-of-the-art results on Korean-specific benchmarks, demonstrating deep linguistic and cultural understanding. Finally, Responsible AI evaluations validate safety against adversarial attacks, confirming a deployment profile that balances harmlessness and responsiveness.
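The abstract mentions AST analysis as one stage of the code-data curation pipeline. As a minimal sketch of what such a filter can look like (the function name, threshold, and exact criteria are illustrative assumptions, not the paper's actual pipeline), a sample can be kept only if it parses and contains real program structure:

```python
# Hypothetical AST-based code-quality filter: one stage of a
# quality-centric curation pipeline. Names and thresholds are illustrative.
import ast

def ast_quality_check(source: str, min_functions: int = 1) -> bool:
    """Keep a Python sample only if it parses and defines enough functions."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False  # unparseable samples are discarded outright
    # Count function definitions as a crude proxy for structural quality.
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    return len(funcs) >= min_functions

good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b:\n    return a+b"   # missing closing parenthesis
print(ast_quality_check(good))  # True
print(ast_quality_check(bad))   # False
```

A real pipeline would combine checks like this with deduplication and model-based scoring; the sketch only shows the parse-and-inspect pattern.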

KT Tech Innovation Group • 2026

Related benchmarks

Task                       | Dataset                         | Metric                  | Result | Rank
Instruction Following      | IFEval                          | -                       | -      | 625
Coding                     | HumanEval+                      | Pass@1                  | 92.07  | 83
Coding                     | MBPP+                           | Pass@1                  | 89.68  | 52
General Knowledge          | MMLU-Pro                        | EM                      | 81.8   | 22
Coding                     | LiveCodeBench v6                | Pass@1                  | 74.79  | 20
Mathematics                | AIME25                          | Exact Match             | 70     | 18
Trustworthiness Evaluation | LLM Trustworthiness Benchmark   | Bias Score              | 89.58  | 17
Bias Evaluation            | KoBBQ                           | Ambiguous Context Score | 94.56  | 17
Instruction Following      | Ko-IFEval                       | Overall Score           | 85.6   | 13
Language Comprehension     | Korean Comprehension 1.0 (test) | Ko-Sov (EM)             | 73.5   | 9

Showing 10 of 23 rows.
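The coding rows above report Pass@1. A minimal sketch of the standard unbiased pass@k estimator commonly used for HumanEval-style benchmarks, where n completions are sampled per problem and c of them pass the tests (the numbers below are illustrative, not taken from the table):

```python
# Unbiased pass@k estimator: probability that at least one of k
# randomly chosen completions (out of n sampled, c correct) passes.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """n = samples drawn, c = samples that pass, k = budget."""
    if n - c < k:
        return 1.0  # fewer failures than the budget: success guaranteed
    # 1 - P(all k chosen completions are failures)
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples and 9 passing, pass@1 is simply the pass rate:
print(pass_at_k(10, 9, 1))  # 0.9
```

A benchmark score is then the average of this estimate over all problems; with k = 1 and a single sample per problem it reduces to the plain pass rate.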
