
Preserving LLM Capabilities through Calibration Data Curation: From Analysis to Optimization

About

Post-training compression is widely employed to scale down large language models (LLMs) and enable efficient inference. Across the various proposed compression methods, including pruning and quantization, calibration data plays a vital role by informing weight importance and activation dynamic ranges. However, how calibration data affects LLM capabilities after compression remains underexplored. The few existing works that recognize the significance of this question investigate only language modeling or commonsense reasoning degradation, and from limited angles such as data sources or sample counts. More systematic research is needed to examine the impact on different LLM capabilities in terms of the compositional properties and domain correspondence of calibration data. In this work, we aim to bridge this gap and further analyze the underlying influencing mechanisms from the activation-pattern perspective. In particular, we explore the calibration data's impact on high-level complex reasoning capabilities, such as math problem solving and code generation. Delving into the underlying mechanism, we find that representativeness and diversity in activation space more fundamentally determine the quality of calibration data. Finally, we propose a calibration data curation framework based on these observations and analysis, enhancing the ability of existing post-training compression methods to preserve critical LLM capabilities. Our code is available at https://github.com/BokwaiHo/COLA.git.

Bowei He, Lihao Yin, Huiling Zhen, Shuqi Liu, Han Wu, Xiaokun Zhang, Mingxuan Yuan, Chen Ma • 2025
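
As background on the mechanism the abstract describes, the sketch below illustrates the two statistics that calibration data typically informs for a single linear layer: a per-tensor activation range that fixes a uniform quantization grid, and a Wanda-style weight-importance score (|W| scaled by per-channel activation norms) of the kind that pruning methods threshold. This is a minimal sketch with illustrative names, not the paper's implementation:

```python
import numpy as np

def calibration_statistics(weight: np.ndarray, calib_acts: np.ndarray, n_bits: int = 8):
    """Gather the two statistics calibration data informs, for one linear layer.

    weight:     (out_features, in_features) layer weight matrix.
    calib_acts: (n_tokens, in_features) inputs to this layer, recorded while
                running the calibration set through the model.

    Returns the observed activation range (which fixes an asymmetric uniform
    quantization grid) and a Wanda-style importance score |W_ij| * ||X_j||_2
    (which decides which weights to prune). Poor calibration coverage means a
    clipped or wasted quantization range and misranked weights at inference.
    """
    lo, hi = float(calib_acts.min()), float(calib_acts.max())
    scale = (hi - lo) / (2 ** n_bits - 1)           # quantization step size
    col_norms = np.linalg.norm(calib_acts, axis=0)  # per-input-channel norm
    importance = np.abs(weight) * col_norms[None, :]
    return (lo, hi, scale), importance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 16))
    X = rng.normal(size=(1024, 16))        # stand-in calibration activations
    (lo, hi, scale), imp = calibration_statistics(W, X)
    keep = imp > np.quantile(imp, 0.5)     # 50% unstructured pruning mask
    print(f"act range [{lo:.2f}, {hi:.2f}], scale {scale:.4f}, kept {keep.mean():.0%}")
```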
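
The abstract's central finding, that representativeness and diversity in activation space determine calibration-data quality, suggests a natural curation recipe: embed each candidate sample by the activations it induces in the target model, then pick a subset that covers the activation space. The sketch below illustrates one such recipe with k-means over synthetic activation vectors; the function names and the use of k-means are assumptions for illustration, not necessarily COLA's actual selection procedure:

```python
import numpy as np

def select_calibration_samples(activations: np.ndarray, k: int, seed: int = 0):
    """Pick k calibration samples that are representative AND diverse.

    activations: (n_samples, hidden_dim) array, e.g. each row the mean hidden
    state a candidate text produces in the target LLM. Returns chosen indices.

    Illustrative strategy: cluster the activation space with k-means, then
    take the real sample nearest each centroid. Clusters supply diversity;
    nearest-to-centroid supplies representativeness.
    """
    rng = np.random.default_rng(seed)
    n, _ = activations.shape
    # k-means++-style init: spread initial centroids apart.
    centroids = activations[rng.choice(n, 1)]
    for _ in range(k - 1):
        dists = np.min(((activations[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        centroids = np.vstack([centroids,
                               activations[rng.choice(n, p=dists / dists.sum())]])
    # Lloyd iterations until the centroids stop moving.
    for _ in range(50):
        assign = np.argmin(((activations[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        new = np.array([activations[assign == c].mean(0) if (assign == c).any()
                        else centroids[c] for c in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    # Return the real sample closest to each centroid (duplicates collapsed).
    chosen = [int(np.argmin(((activations - c) ** 2).sum(-1))) for c in centroids]
    return sorted(set(chosen))

if __name__ == "__main__":
    # Synthetic stand-in for per-sample mean activations of 512 candidates.
    acts = np.random.default_rng(42).normal(size=(512, 64))
    idx = select_calibration_samples(acts, k=16)
    print(f"selected {len(idx)} calibration samples: {idx}")
```

In practice the activation embeddings would come from forward passes of the target LLM over the candidate pool rather than from random vectors.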

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 67.56 | 1891 |
| Commonsense Reasoning | WinoGrande | Accuracy | 60.03 | 1085 |
| Language Modeling | Wiki, C4, and Pile | Average Perplexity | 7.35 | 52 |
| Language Understanding | MMLU-M | Accuracy | 24.44 | 29 |
| Multi-task Language Understanding | MMLU-M | Accuracy | 24.44 | 26 |
| Mathematical Reasoning | GSM8K | Accuracy | 75.82 | 21 |
| Aggregate Model Performance | Summary Average | Accuracy | 62.47 | 4 |
| Reading Comprehension | BoolQ | Accuracy | 72.57 | 3 |
| Mathematical Reasoning | GSM8K | Accuracy | 68.46 | 3 |
| Question Answering | OpenBookQA | Accuracy | 45.4 | 2 |
