
Efficient Knowledge Injection in LLMs via Self-Distillation

About

In many practical applications, large language models (LLMs) need to acquire new knowledge not present in their pre-training data. Efficiently leveraging this knowledge usually relies on supervised fine-tuning or retrieval-augmented generation (RAG). Although RAG has emerged as the industry standard for knowledge injection, fine-tuning has not yet achieved comparable success. This paper proposes utilizing prompt distillation, a self-distillation-based method previously explored primarily for style alignment and instruction tuning, to internalize new factual knowledge from free-form documents. Unlike prior methods, our approach requires neither larger teacher models nor structured knowledge formats. Across multiple LLM sizes and model families, we show that prompt distillation outperforms standard supervised fine-tuning and can even surpass RAG. We analyze the key factors contributing to prompt distillation's effectiveness and examine how it scales.
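The abstract only sketches the method, but the core idea of prompt distillation can be illustrated: the teacher and student share the same weights, and the teacher simply sees the source document in its prompt while the student does not. Below is a minimal sketch of a single distillation step, assuming a PyTorch/Hugging Face setup; the model name, prompt layout, toy data, and KL objective over answer tokens are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of one prompt-distillation step. The "teacher" is the SAME
# model conditioned on the source document; the "student" sees only the
# question. Prompt format, loss, and data here are assumptions for
# illustration, not the paper's specification.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper evaluates several model families
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical toy example of "new knowledge" in a free-form document.
document = "Acme Corp appointed Jane Doe as CEO in March 2024."
question = "Who is the CEO of Acme Corp?"
answer = " Jane Doe"

def answer_logits(prefix: str, answer: str, grad: bool) -> torch.Tensor:
    """Return the model's logits over the answer tokens, given a prefix."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    answer_ids = tokenizer(answer, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, answer_ids], dim=1)
    ctx = torch.enable_grad() if grad else torch.no_grad()
    with ctx:
        logits = model(input_ids).logits
    # The logit at position i predicts token i+1, so the logits predicting
    # the answer tokens start one position before the answer begins.
    start = prefix_ids.size(1) - 1
    return logits[:, start : start + answer_ids.size(1), :]

# Teacher pass: same weights, document in the prompt, no gradients.
teacher = answer_logits(document + "\n\n" + question, answer, grad=False)
# Student pass: no document in the prompt; gradients flow only here.
student = answer_logits(question, answer, grad=True)

# Self-distillation objective: pull the student's answer-token distribution
# toward the teacher's, internalizing the document's content into the weights.
loss = F.kl_div(
    F.log_softmax(student, dim=-1),
    F.log_softmax(teacher, dim=-1),
    log_target=True,
    reduction="batchmean",
)
loss.backward()  # an optimizer step would follow in a real training loop
```

In a full training loop this step would be repeated over many document-question pairs, after which the document is no longer needed at inference time, which is what distinguishes this setup from RAG.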

Kalle Kujanpää, Pekka Marttinen, Harri Valpola, Alexander Ilin • 2024

Related benchmarks

Task                   Dataset             Metric               Result  Rank
Question Answering     ARC Challenge       Accuracy             72.17   749
Question Answering     ARC Easy            Normalized Accuracy  76.98   385
Reading Comprehension  RACE high           Accuracy             58.89   295
Logical Reasoning      BBH                 Accuracy             55.41   93
Abstract Reasoning     AbsR                --                   --      56
Reading Comprehension  RACE Middle School  Accuracy (RACE MS)   68.91   16
Multitask Knowledge    MMLU                Accuracy             60.17   15
Commonsense Reasoning  Common              Accuracy             57.73   4
