
Knowledge-Instruct: Effective Continual Pre-training from Limited Data using Instructions

About

While Large Language Models (LLMs) acquire vast knowledge during pre-training, they often lack domain-specific, new, or niche information. Continual pre-training (CPT) attempts to address this gap but suffers from catastrophic forgetting and inefficiencies in low-data regimes. We introduce Knowledge-Instruct, a novel approach to efficiently inject knowledge from limited corpora through pure instruction-tuning. By generating information-dense synthetic instruction data, it effectively integrates new knowledge while preserving general reasoning and instruction-following abilities. Knowledge-Instruct demonstrates superior factual memorization, minimizes catastrophic forgetting, and remains scalable by leveraging synthetic data from relatively small language models. Additionally, it enhances contextual understanding, including complex multi-hop reasoning, facilitating integration with retrieval systems. We validate its effectiveness across diverse benchmarks, including Companies, a new dataset that we release to measure knowledge injection capabilities.
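
The abstract describes the pipeline at a high level: distill a limited corpus into information-dense synthetic instruction pairs, then train on those pairs alone. A minimal sketch of that idea follows; it is not the authors' implementation, and the `llm_generate` callable, the prompt wording, and the `pairs_per_passage` knob are all hypothetical placeholders.

```python
# Minimal sketch of the Knowledge-Instruct idea as framed in the abstract:
# turn corpus passages into synthetic instruction pairs via a generator LLM,
# then use only those pairs for supervised instruction-tuning.
import json
from typing import Callable

def build_instruction_pairs(
    passages: list[str],
    llm_generate: Callable[[str], str],  # hypothetical: any text-in/text-out LLM call
    pairs_per_passage: int = 3,          # hypothetical knob, not from the paper
) -> list[dict]:
    """Ask a (relatively small) generator model for Q/A pairs that together
    cover the facts in each passage, so those facts survive fine-tuning."""
    template = (
        "Write {n} question-answer pairs that together cover every fact in the "
        "passage below, answering only from the passage.\n\n"
        "Passage:\n{passage}\n\n"
        'Return a JSON list: [{{"instruction": "...", "response": "..."}}]'
    )
    pairs: list[dict] = []
    for passage in passages:
        raw = llm_generate(template.format(n=pairs_per_passage, passage=passage))
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # skip malformed generations rather than train on them
        if isinstance(parsed, list):
            pairs.extend(parsed)
    return pairs
```

The resulting pairs would then feed a standard supervised fine-tuning run, with no next-token pass over the raw corpus, which is how the abstract contrasts the approach with continual pre-training.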

Oded Ovadia, Meni Brief, Rachel Lemberg, Eitam Sheetrit • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Legal Reasoning | LegalBench CUAD Cardlytics Buffalo Wild Wings PF Hospitality 2023 | Accuracy (Cardl) | 78.6 | 6
