
PANDA: Preference Adaptation for Enhancing Domain-Specific Abilities of LLMs

About

While large language models (LLMs) have demonstrated considerable capabilities across various natural language tasks, they often fall short of the performance achieved by domain-specific state-of-the-art models. One potential approach to enhancing the domain-specific capabilities of LLMs is to fine-tune them on corresponding datasets. However, this method can be both resource- and time-intensive, and it is not applicable to closed-source commercial LLMs. In this paper, we propose Preference Adaptation for Enhancing Domain-specific Abilities of LLMs (PANDA), a method designed to augment the domain-specific capabilities of LLMs by leveraging insights from the response preferences of expert models, without requiring fine-tuning. Our experimental results reveal that PANDA significantly enhances the domain-specific ability of LLMs on text classification and interactive decision-making tasks. Moreover, an LLM with PANDA even outperforms the expert model it learns from on 4 tasks of ScienceWorld. This finding highlights the potential of exploring tuning-free approaches to achieve weak-to-strong generalization.
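The abstract describes PANDA only at a high level: an expert model's response preferences guide a general-purpose LLM without any parameter updates. A minimal, heavily hedged sketch of that idea is a preference-guided selection loop, where an expert's preference score picks among sampled LLM responses. All names here (`generate_candidates`, `expert_preference_score`, `panda_style_select`) are illustrative assumptions, not the authors' API, and the scoring heuristic is a toy stand-in:

```python
# Hedged sketch of tuning-free, expert-preference-guided response selection
# in the spirit of PANDA. This is NOT the paper's algorithm; it only
# illustrates "use an expert's preferences without fine-tuning the LLM".
from typing import Callable, List


def generate_candidates(prompt: str, n: int = 4) -> List[str]:
    # Stand-in for sampling n responses from a general-purpose LLM.
    # Candidates grow longer so the toy expert below can distinguish them.
    return [prompt + " " + "elaboration " * i for i in range(1, n + 1)]


def expert_preference_score(response: str) -> float:
    # Stand-in for a domain expert model's preference signal
    # (e.g., a classifier's confidence that the response is correct).
    # Toy heuristic: longer responses score higher.
    return float(len(response))


def panda_style_select(prompt: str,
                       score: Callable[[str], float],
                       n: int = 4) -> str:
    """Return the candidate the expert prefers; no parameter updates occur."""
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=score)


best = panda_style_select("Classify this tweet:", expert_preference_score)
```

With the toy scorer, the longest candidate wins; in the real method the preference signal would come from a trained domain expert rather than response length.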

An Liu, Zonghan Yang, Zhenhe Zhang, Qingyuan Hu, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu • 2024

Related benchmarks

Task | Dataset | Result | Rank
Twitter Text Classification | TweetEval latest (test) | Emoji: 0.201 | 9
Scientific Reasoning in Text-based Environments | ScienceWorld (test) | Task 1-1 Score: 1 | 7

Other info

Code
