
Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training

About

Large Language Models (LLMs) have demonstrated strong general capabilities, yet their deployment in finance remains challenging due to dense domain-specific terminology, stringent numerical reasoning requirements, and low tolerance for factual errors. We conduct a controlled empirical study showing that in specialized vertical domains, performance is largely determined by the quality and difficulty/verifiability profile of post-training data. We introduce ODA-Fin-SFT-318k, constructed via multi-stage distillation and verification to produce high-quality Chain-of-Thought supervision, and ODA-Fin-RL-12k, curated for hard-but-verifiable tasks that balance reward precision and task diversity. Using standard SFT and RL pipelines, we show that high-quality CoT distillation establishes a robust foundation during SFT, while difficulty- and verifiability-aware sampling improves RL generalization. Evaluated on nine benchmarks spanning general financial tasks, sentiment analysis, and numerical reasoning, our ODA-Fin-RL-8B consistently surpasses open-source state-of-the-art (SOTA) financial LLMs of comparable size. We release our ODA-Fin-SFT-318k and ODA-Fin-RL-12k datasets, along with trained models, to advance data-centric financial AI research.
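The difficulty- and verifiability-aware sampling described above can be sketched as follows. This is an illustrative reconstruction, not the paper's actual pipeline: the function names, data layout, and solve-rate thresholds (`low`, `high`) are all assumptions. The idea is to keep tasks that a reference model solves only sometimes (hard but learnable) and whose answers can be checked automatically (so the RL reward is precise).

```python
# Hypothetical sketch of difficulty- and verifiability-aware data curation.
# All names and thresholds are illustrative, not taken from the paper.

def solve_rate(attempts, reference_answer):
    """Fraction of sampled model attempts matching the verifiable answer."""
    return sum(a == reference_answer for a in attempts) / len(attempts)

def select_rl_pool(candidates, low=0.1, high=0.6):
    """Keep hard-but-solvable, verifiable tasks for the RL stage."""
    pool = []
    for task in candidates:
        if task["answer"] is None:       # no ground truth -> reward is noisy
            continue
        rate = solve_rate(task["attempts"], task["answer"])
        if low <= rate <= high:          # drop too-easy and never-solved tasks
            pool.append(task)
    return pool

# Example: three candidate tasks with answers sampled from a base model.
candidates = [
    {"answer": "4",  "attempts": ["4", "4", "4", "4"]},     # too easy (rate 1.0)
    {"answer": "17", "attempts": ["17", "12", "17", "9"]},  # hard but solvable (0.5)
    {"answer": None, "attempts": ["a", "b", "c", "d"]},     # unverifiable
]
print(len(select_rl_pool(candidates)))  # only the middle task survives
```

A mid-range solve rate is a common proxy for difficulty in verifiable-reward RL setups; the verifiability check guards reward precision by discarding tasks where correctness cannot be scored automatically.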

Chuxue Cao, Honglin Lin, Zhanping Zhong, Xin Gao, Mengzhang Cai, Conghui He, Sirui Han, Lijun Wu • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Sentiment Analysis | FOMC | -- | -- | 44
Financial Reasoning | FinQA | Accuracy | 73.3 | 33
Financial Reasoning | ConvFinQA | Accuracy | 80.4 | 23
Sentiment Analysis | FPB | Weighted F1 | 0.834 | 15
Sentiment Analysis | Headlines | Weighted F1 | 78.5 | 15
Financial Knowledge | FinanceIQ | Accuracy | 74.2 | 15
Financial Knowledge | Fineval | Accuracy | 77 | 15
Numerical Reasoning | TATQA | Accuracy | 89.3 | 14
Financial Knowledge | Finova | Accuracy | 54.6 | 14
