
Reasoning with Sarcasm by Reading In-between

About

Sarcasm is a sophisticated speech act that commonly appears in online communities such as Twitter and Reddit. Its prevalence on the social web is highly disruptive to opinion mining systems, due not only to its tendency to flip polarity but also to its use of figurative language. Sarcasm commonly manifests as a contrastive theme, either between positive and negative sentiments or between literal and figurative scenarios. In this paper, we revisit the notion of modeling contrast in order to reason with sarcasm. More specifically, we propose an attention-based neural model that looks in-between instead of across, enabling it to explicitly model contrast and incongruity. We conduct extensive experiments on six benchmark datasets from Twitter, Reddit and the Internet Argument Corpus. Our proposed model not only achieves state-of-the-art performance on all datasets but also enjoys improved interpretability.
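The core idea of "looking in-between" is intra-sentence attention: instead of attending across two sequences, the model scores pairs of words within the same sentence, so a strong positive-negative contrast (e.g. "love" vs. "waiting") can drive the attention weights. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the scoring vector `w`, the random initialisation, and the choice of max-pooling each word's pairwise scores are illustrative assumptions.

```python
import numpy as np

def intra_attention(embeddings, rng=None):
    """Sketch of intra-sentence (word-pair) attention for incongruity.

    embeddings: (n, d) array of word vectors for one sentence.
    Scores every ordered word pair, keeps each word's strongest
    pairwise score, and softmax-normalises those maxima into
    attention weights over the words.
    Returns (weights, attended), where attended is the weighted sum.
    """
    n, d = embeddings.shape
    rng = np.random.default_rng(0) if rng is None else rng
    w = rng.standard_normal(2 * d)  # hypothetical pair-scoring vector

    scores = np.full((n, n), -np.inf)  # -inf excludes self-pairs from the max
    for i in range(n):
        for j in range(n):
            if i != j:
                pair = np.concatenate([embeddings[i], embeddings[j]])
                scores[i, j] = w @ pair  # affinity/contrast of pair (i, j)

    per_word = scores.max(axis=1)             # strongest pair per word
    weights = np.exp(per_word - per_word.max())  # stable softmax
    weights /= weights.sum()
    attended = weights @ embeddings           # (d,) attended representation
    return weights, attended
```

In a full model, the attended vector would be combined with a compositional (e.g. LSTM) representation before classification; the pair scores themselves are what make the attention interpretable, since high-weight word pairs can be inspected directly.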

Yi Tay, Luu Anh Tuan, Siu Cheung Hui, Jian Su • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Sarcasm Detection | SemEval 2018 | Accuracy | 65.2 | 24 |
| Sarcasm Detection | IAC V2 | Accuracy | 73.6 | 24 |
| Sarcasm Detection | IAC V1 | Accuracy | 65 | 24 |
| Multimodal Sarcasm Detection | MUStARD original (speaker-dependent) | Precision | 64.7 | 15 |
| Multimodal Sarcasm Detection | MUStARD speaker-independent original | F1 Score | 54 | 13 |
| Sarcasm Detection | IAC-V1, IAC-V2, and SemEval-2018 Average | Accuracy | 0.679 | 10 |
