
Is Sarcasm Detection A Step-by-Step Reasoning Process in Large Language Models?

About

Elaborating a series of intermediate reasoning steps significantly improves the ability of large language models (LLMs) to solve complex problems, since such steps prompt LLMs to reason sequentially. However, human sarcasm understanding is often considered an intuitive and holistic cognitive process, in which various linguistic, contextual, and emotional cues are integrated to form a comprehensive understanding, rather than one that necessarily proceeds step by step. To verify the validity of this argument, we introduce a new prompting framework (called SarcasmCue) containing four sub-methods, viz. chain of contradiction (CoC), graph of cues (GoC), bagging of cues (BoC) and tensor of cues (ToC), which elicit LLMs to detect human sarcasm through both sequential and non-sequential prompting. Through a comprehensive empirical comparison on four benchmarks, we highlight three key findings: (1) CoC and GoC show superior performance with more advanced models like GPT-4 and Claude 3.5, with an improvement of 3.5%. (2) ToC significantly outperforms other methods when smaller LLMs are evaluated, boosting the F1 score by 29.7% over the best baseline. (3) Our proposed framework consistently pushes the state-of-the-art (i.e., ToT) by 4.2%, 2.0%, 29.7%, and 58.2% in F1 scores across the four datasets. This demonstrates the effectiveness and stability of the proposed framework.
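To make the sequential variant concrete, the chain of contradiction (CoC) idea can be sketched as a prompt that asks the model to compare a text's surface sentiment with the speaker's true intent and label a mismatch as sarcasm. The following is a minimal illustrative sketch only; the function name and exact prompt wording are assumptions, not taken from the paper.

```python
def build_coc_prompt(text: str) -> str:
    """Assemble a chain-of-contradiction style prompt (hypothetical wording):
    (1) read the surface sentiment, (2) infer the true intent, and
    (3) decide whether the two contradict, which signals sarcasm."""
    steps = [
        "Step 1: Identify the surface sentiment expressed by the text.",
        "Step 2: Infer the speaker's true, underlying intent from context.",
        "Step 3: Decide whether the surface sentiment contradicts the true "
        "intent; if so, label the text as sarcastic.",
    ]
    return (
        f"Text: {text}\n"
        + "\n".join(steps)
        + "\nAnswer with 'sarcastic' or 'not sarcastic'."
    )


# Example: the assembled prompt would be sent to an LLM for classification.
prompt = build_coc_prompt("Oh great, another Monday morning meeting.")
print(prompt)
```

The non-sequential variants (GoC, BoC, ToC) would instead aggregate multiple cues without imposing a fixed ordering of reasoning steps.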

Ben Yao, Yazhou Zhang, Qiuchi Li, Jing Qin • 2024

Related benchmarks

Task               Dataset       Result            Rank
Sarcasm Detection  IAC V1        Accuracy: 72.19   24
Sarcasm Detection  IAC V2        Accuracy: 73.36   24
Sarcasm Detection  SemEval 2018  Accuracy: 0.7079  24
