Re-TASK: Revisiting LLM Tasks from Capability, Skill, and Knowledge Perspectives
About
The Chain-of-Thought (CoT) paradigm has become a pivotal method for solving complex problems with large language models (LLMs). However, its application to domain-specific tasks remains challenging, as LLMs often fail to decompose tasks accurately or execute subtasks effectively. This paper introduces the Re-TASK framework, a novel theoretical model that revisits LLM tasks from capability, skill, and knowledge perspectives, drawing on the principles of Bloom's Taxonomy and Knowledge Space Theory. While CoT provides a workflow-centric perspective on tasks, Re-TASK introduces a Chain-of-Learning (CoL) paradigm that highlights task dependencies on specific capability items, further broken down into their constituent knowledge and skill components. To address CoT failures, we propose a Re-TASK prompting strategy, which strengthens task-relevant capabilities through targeted knowledge injection and skill adaptation. Experiments across diverse domains demonstrate the effectiveness of Re-TASK. In particular, we achieve improvements of 45.00% on Yi-1.5-9B and 24.50% on Llama3-Chinese-8B for legal tasks. These results highlight the potential of Re-TASK to significantly enhance LLM performance and its applicability in specialized domains. We release our code and data at https://github.com/Uylee/Re-TASK.
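The Re-TASK prompting strategy described above can be sketched as follows. This is a hypothetical illustration, not the authors' released implementation: the helper name `build_retask_prompt` and the prompt layout are assumptions; the core idea, prepending task-relevant knowledge items and a skill demonstration to the task, follows the abstract.

```python
# Hypothetical sketch of Re-TASK-style prompting (not the authors'
# released code): strengthen a capability item by (1) injecting
# task-relevant knowledge and (2) adapting the skill via a worked
# demonstration, then append the task itself.

def build_retask_prompt(task: str, knowledge_items: list[str], skill_demo: str) -> str:
    """Compose a prompt with injected knowledge and a skill demonstration."""
    knowledge = "\n".join(f"- {item}" for item in knowledge_items)
    return (
        f"Relevant knowledge:\n{knowledge}\n\n"
        f"Worked example (skill demonstration):\n{skill_demo}\n\n"
        f"Task:\n{task}\n"
        "Answer step by step."
    )

# Example use for a legal (sentencing) task; the content is illustrative.
prompt = build_retask_prompt(
    task="Predict the sentencing range for the described theft case.",
    knowledge_items=[
        "The relevant statute defines penalty tiers by the amount stolen.",
        "Aggravating factors (e.g., repeat offenses) raise the tier.",
    ],
    skill_demo="Case facts -> identify statute -> map facts to tier -> state range.",
)
```

The resulting prompt string would then be passed to the target LLM in place of a plain CoT prompt.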
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Financial Question Answering | FinanceIQ | Accuracy (%) | 73.6 | 27 |
| Sentencing Prediction | CAIL Law Domain | Accuracy (%) | 85 | 24 |
| STEM Task Evaluation | MMLU Math | Accuracy (%) | 51.81 | 18 |
| STEM Task Evaluation | MMLU Biology | Accuracy (%) | 88.19 | 18 |
| STEM Task Evaluation | MMLU Physics | Accuracy (%) | 60.78 | 18 |