
A Training-Free Large Reasoning Model-based Knowledge Tracing Framework for Unified Prediction and Prescription

About

Knowledge Tracing (KT) aims to estimate a learner's evolving mastery from their interaction history. Recent studies have explored Large Language Models (LLMs) for KT by exploiting their autoregressive nature, but such approaches typically require fine-tuning and exhibit unstable or near-random performance. Moreover, prior KT systems focus primarily on prediction and rely on multi-stage pipelines for feedback and recommendation, increasing system complexity and resource costs. To address this gap, we propose Thinking-KT, a training-free KT framework that incorporates Test-Time Scaling (TTS), enabling even small LLMs to achieve competitive KT performance. Within this framework, a small LLM can jointly perform KT prediction, personalized feedback generation, and learning recommendation in a single unified output without degrading prediction accuracy. Beyond performance, we present a systematic analysis of reasoning traces in KT. Our results demonstrate that TTS is a critical yet underexplored factor in LLM-based KT, and that small LLMs can serve as unified Intelligent Tutoring System (ITS) engines.
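The abstract does not specify which TTS procedure Thinking-KT uses. As a minimal illustration of one common training-free TTS strategy, the sketch below applies self-consistency (majority voting over several sampled reasoning traces) to next-answer prediction; the function names, prompt format, and mock sampler are all illustrative assumptions, not the paper's actual method:

```python
from collections import Counter
from typing import Callable, List, Tuple
import itertools

def build_kt_prompt(history: List[Tuple[str, bool]], next_question: str) -> str:
    """Format a learner's interaction history as a next-answer prediction prompt."""
    lines = [f"Q: {q} -> {'correct' if c else 'incorrect'}" for q, c in history]
    lines.append(f"Q: {next_question} -> ?")
    return "Predict whether the learner answers the next question correctly.\n" + "\n".join(lines)

def predict_with_tts(sample_llm: Callable[[str], str],
                     history: List[Tuple[str, bool]],
                     next_question: str,
                     n_samples: int = 5) -> float:
    """Test-time scaling via self-consistency: sample several reasoning traces
    and majority-vote their final 'correct'/'incorrect' verdicts.
    Returns the fraction of samples voting 'correct' as a probability estimate."""
    prompt = build_kt_prompt(history, next_question)
    votes = Counter(sample_llm(prompt) for _ in range(n_samples))
    return votes["correct"] / n_samples

# Mock sampler standing in for a small LLM (a real system would call a model API
# with nonzero temperature so that repeated samples can disagree).
_answers = itertools.cycle(["correct", "correct", "incorrect", "correct", "correct"])
mock_llm = lambda prompt: next(_answers)

history = [("fractions-1", True), ("fractions-2", False)]
p = predict_with_tts(mock_llm, history, "fractions-3", n_samples=5)
print(p)  # 0.8 with this mock sampler (4 of 5 votes are 'correct')
```

The vote fraction can be read as a calibrated-ish probability for AUC evaluation; scaling `n_samples` is the test-time compute knob.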

Unggi Lee, Joo Young Kim, Ran Ju, Minyoung Jung, Jeyeon Eo • 2026

Related benchmarks

Task               Dataset            Result        Rank
Knowledge Tracing  ASSIST09 (test)    AUC 72.76     21
Knowledge Tracing  DBE-KT22 (test)    AUC 71.85     21
Knowledge Tracing  EdNet-500 (test)   AUC 69.69     21
