
Prompt and Parameter Co-Optimization for Large Language Models

About

Prompt optimization and fine-tuning are two major approaches to improving the performance of Large Language Models (LLMs). They enhance LLM capabilities from complementary perspectives: the former through explicit natural language, the latter through implicit parameter updates. However, prior work has typically studied them in isolation, leaving their synergistic potential largely underexplored. To bridge this gap, we introduce MetaTuner, a novel framework that jointly integrates prompt optimization and fine-tuning for LLM training. Specifically, we employ two neural networks to generate prompts and parameters, respectively, while letting them share a common bottom encoding layer to enable knowledge sharing. Guided by the final supervised signals, the framework is optimized to discover optimal combinations of prompts and parameters. Because prompt learning involves discrete optimization while fine-tuning operates in a continuous parameter space, we design a supervised regularization loss to train the framework effectively. Extensive experiments across diverse benchmarks show that our method consistently outperforms the baselines.

Xiaohe Bo, Rui Li, Zexu Sun, Quanyu Dai, Zeyu Zhang, Zihang Tian, Xu Chen, Zhenhua Dong • 2025
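
The abstract describes an architecture with a shared bottom encoder feeding two generators: a discrete prompt head and a continuous parameter head. The snippet below is a minimal PyTorch sketch of that shape only; the class name `MetaTunerSketch`, all dimensions, and the use of a straight-through Gumbel-softmax relaxation for the discrete prompt head are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MetaTunerSketch(nn.Module):
    """Shared encoder feeding a discrete prompt head and a continuous parameter head."""

    def __init__(self, in_dim=768, hidden=512, vocab_size=32000,
                 prompt_len=8, n_tuned_params=4096):
        super().__init__()
        # Common bottom encoding layer shared by both generators,
        # enabling knowledge sharing between them.
        self.shared_encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Prompt generator: logits over the vocabulary for each prompt slot
        # (the discrete side of the optimization).
        self.prompt_head = nn.Linear(hidden, prompt_len * vocab_size)
        # Parameter generator: a flat vector of continuous deltas for a
        # hypothetical small set of tuned LLM parameters.
        self.param_head = nn.Linear(hidden, n_tuned_params)
        self.prompt_len, self.vocab_size = prompt_len, vocab_size

    def forward(self, task_emb, tau=1.0):
        h = self.shared_encoder(task_emb)
        logits = self.prompt_head(h).view(-1, self.prompt_len, self.vocab_size)
        # Straight-through Gumbel-softmax keeps the discrete prompt choice
        # differentiable end-to-end (an assumed relaxation, not confirmed
        # by the paper).
        prompt_onehot = F.gumbel_softmax(logits, tau=tau, hard=True)
        param_delta = self.param_head(h)
        return prompt_onehot, param_delta


# Usage: one task embedding in; a hard prompt (one-hot token choices)
# and a continuous parameter update out.
model = MetaTunerSketch()
prompt, delta = model(torch.randn(1, 768))
print(prompt.shape, delta.shape)  # torch.Size([1, 8, 32000]) torch.Size([1, 4096])
```

Sharing the bottom encoder is what lets supervision on the final task output shape both heads at once; the Gumbel-softmax here simply stands in for whichever estimator the paper actually uses to bridge the discrete and continuous spaces.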

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-hop Question Answering | HotpotQA (test) | F1 | 59.05 | 255 |
| Reasoning | Checkmate-in-One | Accuracy | 21.43 | 57 |
| Grade School Math Word Problem Solving | GSM8K (test) | Accuracy | 78.92 | 38 |
| Commonsense Question Answering | CosmosQA (test) | EM | 92.25 | 24 |
| Mathematical Reasoning | MATH (test) | Exact Match (EM) | 48.67 | 24 |
