
Efficient and Effective Internal Memory Retrieval for LLM-Based Healthcare Prediction

About

Large language models (LLMs) hold significant promise for healthcare, yet their reliability in high-stakes clinical settings is often compromised by hallucinations and a lack of granular medical context. While Retrieval-Augmented Generation (RAG) can mitigate these issues, standard RAG pipelines require computationally intensive searches over massive external knowledge bases, leading to high latency that is impractical for time-sensitive care. To address this, we introduce Keys to Knowledge (K2K), a novel framework that replaces external retrieval with internal, key-based knowledge access. By encoding essential clinical information directly into the model's parameter space, K2K enables rapid retrieval from internal key-value memory without inference-time retrieval overhead. We further enhance retrieval quality through activation-guided probe construction and cross-attention reranking. Experimental results demonstrate that K2K achieves state-of-the-art performance across four benchmark healthcare outcome prediction datasets.
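The two-stage idea in the abstract — a fast lookup over internal key-value memory, followed by attention-style reranking of the candidates — can be sketched as below. This is a minimal illustration, not K2K's actual implementation: the memory sizes, the probe, and the scoring functions are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical internal key-value memory (sizes are illustrative only).
d = 16          # embedding dimension
n_entries = 8   # number of stored clinical knowledge entries
keys = rng.normal(size=(n_entries, d))    # learned memory keys
values = rng.normal(size=(n_entries, d))  # associated knowledge vectors

def retrieve(probe, top_k=4):
    """Stage 1: fast inner-product lookup over internal keys."""
    scores = keys @ probe
    return np.argsort(scores)[::-1][:top_k]

def rerank(probe, candidate_ids):
    """Stage 2: attention-style reranking and pooling of candidates."""
    cand_keys = keys[candidate_ids]
    attn = cand_keys @ probe / np.sqrt(d)   # scaled dot-product scores
    attn = np.exp(attn - attn.max())
    attn /= attn.sum()                      # softmax weights
    order = np.argsort(attn)[::-1]
    pooled = values[candidate_ids].T @ attn # attention-weighted value vector
    return candidate_ids[order], pooled

# In K2K the probe is derived from model activations; here it is random.
probe = rng.normal(size=d)
cands = retrieve(probe)
ranked, pooled = rerank(probe, cands)
print(len(ranked), pooled.shape)
```

Because the memory lives in the model's own parameter space, stage 1 is a single matrix-vector product rather than a search over an external corpus, which is the source of the latency savings the abstract claims.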

Mingchen Li, Jiatan Huang, Zonghai Yao, Hong Yu • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Readmission prediction | MIMIC-IV | AUC-ROC | 0.6647 | 70
Mortality prediction | MIMIC-III | AUROC | 61.05 | 46
Readmission prediction | MIMIC-III (target) | AUPRC | 62.49 | 35
Mortality prediction | MIMIC-IV | F1 score | 6.61 | 16
