
Decoder Tuning: Efficient Language Understanding as Decoding

About

With the ever-growing size of pre-trained models (PTMs), it has become an emerging practice to provide only inference APIs to users, namely the model-as-a-service (MaaS) setting. To adapt PTMs with their parameters frozen, most current approaches focus on the input side, seeking powerful prompts that stimulate models to produce correct answers. However, we argue that input-side adaptation can be arduous due to the lack of gradient signals, and it usually requires thousands of API queries, resulting in high computation and time costs. In light of this, we present Decoder Tuning (DecT), which instead optimizes task-specific decoder networks on the output side. Specifically, DecT first extracts prompt-stimulated output scores for initial predictions. On top of that, we train an additional decoder network on the output representations to incorporate posterior data knowledge. With gradient-based optimization, DecT can be trained within several seconds and requires only one PTM query per sample. Empirically, we conduct extensive natural language understanding experiments and show that DecT significantly outperforms state-of-the-art algorithms with a $200\times$ speed-up.

Ganqu Cui, Wentao Li, Ning Ding, Longtao Huang, Zhiyuan Liu, Maosong Sun • 2022
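The output-side adaptation described in the abstract admits a compact implementation: each training example is passed through the frozen PTM once with a prompt, the prompt-stimulated label scores and the output representation are cached, and a small decoder network is then trained purely on those cached outputs. The sketch below illustrates that setup in PyTorch; the MLP decoder, the additive score fusion, and all hyperparameters are illustrative assumptions, not the paper's exact decoder design.

```python
# Minimal sketch of output-side tuning on cached PTM outputs.
# Assumptions (not from the paper): the decoder is a small MLP whose logits
# are added to the prompt-stimulated label scores; hyperparameters are arbitrary.
import torch
import torch.nn as nn

class ScoreDecoder(nn.Module):
    """Small trainable decoder applied to frozen PTM output representations."""
    def __init__(self, hidden_size: int, num_labels: int, scale: float = 1.0):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, num_labels),
        )
        self.scale = scale  # weight of the posterior (decoder) term

    def forward(self, reps: torch.Tensor, prompt_scores: torch.Tensor) -> torch.Tensor:
        # reps:          [batch, hidden_size]  cached PTM representations (one API query each)
        # prompt_scores: [batch, num_labels]   prompt-stimulated label scores from the PTM
        return prompt_scores + self.scale * self.mlp(reps)

def train_decoder(reps, prompt_scores, labels, num_labels, epochs=30, lr=1e-3):
    """Gradient-based training on cached outputs only; the PTM is never queried again."""
    decoder = ScoreDecoder(reps.size(-1), num_labels)
    opt = torch.optim.Adam(decoder.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        logits = decoder(reps, prompt_scores)
        loss_fn(logits, labels).backward()
        opt.step()
    return decoder
```

Because the decoder only sees cached tensors, training cost is independent of the PTM's size, which is what makes the second-scale training time and single query per sample possible.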

Related benchmarks

Task                        | Dataset            | Accuracy (%) | Rank
Natural Language Inference  | RTE                | 69.2         | 367
Natural Language Inference  | SNLI               | 69.7         | 174
Topic Classification        | AG-News            | 86.4         | 173
Sentiment Analysis          | SST-2              | 92.7         | 156
Topic Classification        | DBpedia            | 94.6         | 117
Natural Language Inference  | MNLI (matched)     | 55.3         | 110
Natural Language Inference  | MNLI (mismatched)  | 56.8         | 68
Sentiment Analysis          | IMDB               | 92.1         | 57
Topic Classification        | Yahoo              | 64.2         | 42
Topic Classification        | Yahoo (test)       | 71.3         | 36
(Showing 10 of 17 benchmark results.)

Other info

Code
