
JsonTuning: Towards Generalizable, Robust, and Controllable Instruction Tuning

About

Instruction tuning is vital for enhancing the performance of large language models (LLMs), but existing text-to-text methods, referred to as TextTuning, struggle with issues such as generalization, robustness, and controllability due to their lack of explicit task structures. We introduce JsonTuning, a structure-to-structure approach that uses JSON structures to represent tasks. This method improves generalization by clarifying task elements and their relations, boosts robustness by minimizing ambiguity, and enhances controllability by allowing precise control over outputs. We conduct an extensive comparative analysis between JsonTuning and TextTuning using various language models and benchmarks. Our findings reveal that JsonTuning consistently surpasses TextTuning in terms of performance, robustness, and controllability across different scenarios. By overcoming the limitations of TextTuning, JsonTuning demonstrates significant potential for developing more effective and reliable LLMs capable of handling diverse scenarios.
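To illustrate the contrast the abstract draws, here is a minimal sketch of how a task might be cast as JSON structures rather than free-form text. The field names and schema are illustrative assumptions, not the paper's exact format.

```python
import json

# TextTuning: the task is expressed as free-form text on both sides,
# leaving task elements (instruction, label space) implicit.
text_input = "Extract the person entities: Alice met Bob in Paris."
text_output = "Alice, Bob"

# JsonTuning (sketch): task elements are explicit JSON fields, and the
# expected output is itself a JSON structure, which makes the label
# space and output format explicit and controllable.
json_input = {
    "instruction": "Extract named entities of the requested types from the text.",
    "text": "Alice met Bob in Paris.",
    "entity_types": ["person"],  # explicit label space
}
json_output = {
    "entities": [
        {"type": "person", "span": "Alice"},
        {"type": "person", "span": "Bob"},
    ]
}

# Both sides are serialized to strings before being fed to the LLM.
prompt = json.dumps(json_input, ensure_ascii=False)
target = json.dumps(json_output, ensure_ascii=False)
print(prompt)
print(target)
```

Because the output is structured, a consumer can parse and validate it (e.g., check every extracted span against the declared entity types) instead of post-processing free text.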

Chang Gao, Wenxuan Zhang, Guizhen Chen, Wai Lam • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reasoning | BBH | Accuracy | 46.77 | 507 |
| Multitask Language Understanding | MMLU (test) | Accuracy | 59.24 | 303 |
| Event Extraction | EE | Event Trigger F1 | 7.67 | 14 |
| Named Entity Recognition | InstructUIE NER | F1 Score | 53.15 | 14 |
| Relation Extraction | InstructUIE RE | Relation Boundary F1 | 27.33 | 14 |
| Text-to-SQL | NL2SQL | Execution Accuracy | 53.2 | 14 |

Other info

Code
