
Rethinking Semantic Parsing for Large Language Models: Enhancing LLM Performance with Semantic Hints

About

Semantic parsing aims to capture the meaning of a sentence and convert it into a logical, structured form. Previous studies show that semantic parsing improves the performance of smaller models (e.g., BERT) on downstream tasks. However, it has remained unclear whether these improvements extend to LLMs. In this paper, our empirical findings reveal that, unlike with smaller models, directly adding semantic parsing results to LLM inputs reduces performance. To overcome this, we propose SENSE, a novel prompting approach that embeds semantic hints within the prompt. Experiments show that SENSE consistently improves LLM performance across various tasks, highlighting the potential of integrating semantic information to strengthen LLM capabilities.
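The idea of embedding a semantic hint in the prompt, rather than appending an explicit parse, can be illustrated with a minimal sketch. The hint wording and the function below are illustrative placeholders, not the authors' actual template:

```python
# A minimal SENSE-style prompt builder (hypothetical sketch).
# SEMANTIC_HINT is an assumed placeholder, not the paper's exact wording.

SEMANTIC_HINT = (
    "Before answering, consider the sentence's underlying semantic "
    "structure: its predicates, arguments, and their roles."
)

def build_sense_prompt(task_instruction: str, sentence: str) -> str:
    """Insert a semantic hint between the task instruction and the input.

    Note the contrast with feeding an explicit semantic parse into the
    model, which the paper finds degrades LLM performance.
    """
    return f"{task_instruction}\n\n{SEMANTIC_HINT}\n\nInput: {sentence}"

prompt = build_sense_prompt(
    "Judge whether the two sentences are paraphrases.",
    "Sentence 1: ... Sentence 2: ...",
)
print(prompt)
```

The hint steers the model toward meaning-level reasoning without constraining it to a possibly noisy parser output.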

Kaikai An, Shuzheng Si, Helan Hu, Haozhe Zhao, Yuchi Wang, Qingyan Guo, Baobao Chang • 2024

Related benchmarks

Task                             Dataset                  Metric                  Result   Rank
Natural Language Understanding   GLUE MRPC (test val)     Accuracy                76.47    59
Machine Translation              WMT En-De 2022 (test)    COMET                   86.44    25
Machine Translation              WMT En-De 2022           COMET22                 86.65    16
Paraphrasing                     QQP                      Semantic Faithfulness   90.26    11
Machine Translation              WMT En-Zh 2022           COMET22                 88.06    6
Machine Translation              WMT Zh-En 2022           COMET22                 80.6     6
Text Simplification              TurkCorpus               BLEU                    63.42    2
Text Simplification              GoogleComp               BLEU                    14.31    2
