
Use Your INSTINCT: INSTruction optimization for LLMs usIng Neural bandits Coupled with Transformers

About

Large language models (LLMs) have shown remarkable instruction-following capabilities and achieved impressive performance in various applications. However, the performance of LLMs depends heavily on the instructions given to them, which are typically manually tuned with substantial human effort. Recent work has used the query-efficient Bayesian optimization (BO) algorithm to automatically optimize the instructions given to black-box LLMs. However, BO usually falls short when optimizing highly sophisticated (e.g., high-dimensional) objective functions, such as the functions mapping an instruction to the performance of an LLM. This is mainly due to the limited expressive power of the Gaussian process (GP) which is used by BO as a surrogate to model the objective function. Meanwhile, it has been repeatedly shown that neural networks (NNs), especially pre-trained transformers, possess strong expressive power and can model highly complex functions. So, we adopt a neural bandit algorithm which replaces the GP in BO with an NN surrogate to optimize instructions for black-box LLMs. More importantly, the neural bandit algorithm allows us to naturally couple the NN surrogate with the hidden representation learned by a pre-trained transformer (i.e., an open-source LLM), which significantly boosts its performance. These observations motivate us to propose our INSTruction optimization usIng Neural bandits Coupled with Transformers (INSTINCT) algorithm. We perform instruction optimization for ChatGPT and use extensive experiments to show that INSTINCT consistently outperforms baselines in different tasks, e.g., various instruction induction tasks and the task of improving zero-shot chain-of-thought instructions. Our code is available at https://github.com/xqlin98/INSTINCT.
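To make the neural-bandit idea concrete, here is a minimal sketch of the kind of loop the abstract describes: candidate instructions are mapped to fixed hidden representations (standing in for a frozen pre-trained transformer), a small surrogate is fit on those representations, and the next candidate to query on the black-box LLM is chosen by a UCB-style score (prediction plus an uncertainty bonus). Everything here is an illustrative assumption, not the paper's actual implementation: `EMBED` is a random stand-in for transformer embeddings, `llm_score` is a synthetic stand-in for the black-box LLM's task performance, and the surrogate is a linear special case of an NN surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's actual components):
# - EMBED: frozen "transformer" hidden representations of candidate instructions.
# - llm_score: black-box performance of an instruction, a synthetic function here.
N_CANDIDATES, DIM = 20, 8
EMBED = rng.normal(size=(N_CANDIDATES, DIM))
TRUE_W = rng.normal(size=DIM)

def llm_score(i):
    # Black-box objective we want to maximize over candidate instructions.
    return float(EMBED[i] @ TRUE_W)

def fit_surrogate(X, y, lam=1.0):
    # Ridge-regularized least squares on the embeddings: a linear special
    # case of training an NN surrogate on (representation, score) pairs.
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y), np.linalg.inv(A)

# NeuralUCB-style loop: pick the candidate maximizing predicted score plus
# an uncertainty bonus, query the black box, refit, repeat.
queried, scores = [0], [llm_score(0)]
for t in range(15):
    X = EMBED[queried]
    w, A_inv = fit_surrogate(X, np.array(scores))
    bonus = np.sqrt(np.einsum("id,de,ie->i", EMBED, A_inv, EMBED))
    ucb = EMBED @ w + 0.5 * bonus
    i = int(np.argmax(ucb))
    queried.append(i)
    scores.append(llm_score(i))

best = max(scores)
```

The key design point mirrored from the abstract: the surrogate never sees raw instruction text, only the fixed representations, so a stronger pre-trained encoder directly improves the surrogate's model of the instruction-to-performance function.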

Xiaoqiang Lin, Zhaoxuan Wu, Zhongxiang Dai, Wenyang Hu, Yao Shu, See-Kiong Ng, Patrick Jaillet, Bryan Kian Hsiang Low • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Mathematical Reasoning | GSM8K (test) | Accuracy 92.64 | 900 |
| Code Generation | HumanEval (test) | -- | 506 |
| Mathematical Reasoning | SVAMP | Accuracy 81 | 403 |
| Code Generation | MBPP (test) | -- | 298 |
| Mathematical Reasoning | AQUA-RAT | Accuracy 54.724 | 120 |
| Prompt Selection | Selected tasks, APE-generated prompt pools, APE design (test) | Averaged Performance Rank 2.58 | 44 |
| Question Answering | HotpotQA (test) | -- | 37 |
| Question Answering | DROP (test) | -- | 12 |
| Instruction Induction | Instruction Induction (test) | Active to Passive 97 | 10 |
| Instruction Induction | Instruction Induction (test) | Antonyms 0.847 | 6 |

(10 of 11 rows shown)
