# Efficient and Effective Internal Memory Retrieval for LLM-Based Healthcare Prediction

## About
Large language models (LLMs) hold significant promise for healthcare, yet their reliability in high-stakes clinical settings is often compromised by hallucinations and a lack of granular medical context. While Retrieval Augmented Generation (RAG) can mitigate these issues, standard supervised pipelines require computationally intensive searches over massive external knowledge bases, leading to high latency that is impractical for time-sensitive care. To address this, we introduce Keys to Knowledge (K2K), a novel framework that replaces external retrieval with internal, key-based knowledge access. By encoding essential clinical information directly into the model's parameter space, K2K enables rapid retrieval from internal key-value memory without inference-time overhead. We further enhance retrieval quality through activation-guided probe construction and cross-attention reranking. Experimental results demonstrate that K2K achieves state-of-the-art performance across four benchmark healthcare outcome prediction datasets.
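To make the internal key-value retrieval idea concrete, here is a minimal sketch of dot-product lookup over an in-parameter memory. All names (`keys`, `values`, `retrieve`, the dimensions) are illustrative assumptions, not the paper's actual K2K implementation, and the probe construction and cross-attention reranking steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical internal memory: each key indexes a stored clinical-knowledge
# embedding (values). In K2K these would live in the model's parameter space;
# here they are random vectors purely for illustration.
num_entries, dim = 32, 16
keys = rng.normal(size=(num_entries, dim))
values = rng.normal(size=(num_entries, dim))

def retrieve(query, keys, values, top_k=4):
    """Key-based lookup: no external search, just a matrix product."""
    scores = keys @ query                       # similarity to every key
    top = np.argsort(scores)[::-1][:top_k]      # indices of the top-k keys
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                    # softmax over the top-k scores
    return weights @ values[top], top           # weighted value readout

query = rng.normal(size=dim)                    # stand-in for a probe vector
readout, idx = retrieve(query, keys, values)
print(readout.shape, len(idx))                  # (16,) 4
```

Because the lookup is a single matrix product over parameters already in memory, its cost is fixed and independent of any external corpus size, which is the source of the latency advantage the abstract describes.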
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Readmission prediction | MIMIC-IV | AUC-ROC | 0.6647 | 70 |
| Mortality prediction | MIMIC-III | AUROC | 61.05 | 46 |
| Readmission prediction | MIMIC-III (target) | AUPRC | 62.49 | 35 |
| Mortality prediction | MIMIC-IV | F1 score | 6.61 | 16 |