
Can Large Language Models Address Open-Target Stance Detection?

About

Stance detection (SD) identifies a text's position towards a target, typically labeled as favor, against, or none. We introduce Open-Target Stance Detection (OTSD), the most realistic task setting, in which targets are neither seen during training nor provided as input. We evaluate Large Language Models (LLMs) from the GPT, Gemini, Llama, and Mistral families, comparing their performance to the only existing work, Target-Stance Extraction (TSE), which benefits from predefined targets. Unlike TSE, OTSD removes the dependency on a predefined list, making target generation and evaluation more challenging. We also provide a metric for evaluating target quality that correlates well with human judgment. Our experiments reveal that LLMs outperform TSE in target generation, both when the real target is explicitly mentioned in the text and when it is not. Similarly, LLMs overall surpass TSE in stance detection in both the explicit and non-explicit cases. However, LLMs struggle with both target generation and stance detection when the target is not explicit.
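In OTSD, the model receives only the text and must produce both the target and the stance. A minimal sketch of how such a query could be framed and parsed; the prompt wording, function names, and answer format are illustrative assumptions, not the paper's actual templates:

```python
# Hypothetical zero-shot OTSD query builder: no target is supplied;
# the LLM must infer it and then classify the stance toward it.
def build_otsd_prompt(text: str) -> str:
    """Build a zero-shot prompt asking an LLM for target and stance."""
    return (
        "Read the text below. First, identify the target the text takes a "
        "position on (the target is NOT given to you). Then classify the "
        "stance toward that target as favor, against, or none.\n\n"
        f"Text: {text}\n\n"
        "Answer in the form: Target: <target> | Stance: <favor/against/none>"
    )

def parse_otsd_answer(answer: str) -> tuple[str, str]:
    """Parse an answer shaped like 'Target: X | Stance: Y' (illustrative)."""
    target_part, stance_part = answer.split("|")
    target = target_part.split("Target:")[1].strip()
    stance = stance_part.split("Stance:")[1].strip().lower()
    return target, stance
```

The prompt would be sent to any of the evaluated model families (GPT, Gemini, Llama, Mistral) through their respective APIs; the parsing step normalizes the free-form reply into a (target, stance) pair for evaluation.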

Abu Ubaida Akash, Ahmed Fahmy, Amine Trabelsi • 2024

Related benchmarks

| Task              | Dataset                      | Metric      | Result | Rank |
|-------------------|------------------------------|-------------|--------|------|
| Stance Detection  | VAST Explicit (test)         | SC (%)      | 49.84  | 18   |
| Stance Detection  | VAST Non-explicit (test)     | SC (%)      | 37.50  | 18   |
| Target Generation | VAST Explicit (test)         | SemSim (SS) | 0.89   | 18   |
| Target Generation | VAST Non-explicit (test)     | SemSim (SS) | 0.85   | 18   |
| Stance Detection  | EZSTANCE Explicit (test)     | SC (%)      | 51.46  | 16   |
| Stance Detection  | EZSTANCE Non-explicit (test) | SC (%)      | 48.53  | 16   |
| Target Generation | EZSTANCE Explicit (test)     | SemSim (SS) | 0.89   | 16   |
| Target Generation | EZSTANCE Non-explicit (test) | SemSim (SS) | 0.85   | 16   |
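The SemSim (SS) scores above measure how semantically close a generated target is to the gold target. As an illustrative stand-in only, a similarity can be sketched as cosine similarity over bag-of-words vectors; the paper's actual metric presumably relies on sentence embeddings, and this simplified version is an assumption for demonstration:

```python
import math
from collections import Counter

def cosine_bow(generated: str, gold: str) -> float:
    """Cosine similarity between bag-of-words vectors of two targets.

    Simplified stand-in for an embedding-based semantic similarity:
    identical targets score 1.0, targets with no shared words score 0.0.
    """
    va = Counter(generated.lower().split())
    vb = Counter(gold.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine_bow("gun control", "gun control laws"))  # ≈ 0.816
```

A real embedding-based metric additionally rewards paraphrases ("firearm regulation" vs. "gun control"), which word overlap cannot capture; that gap is why an embedding model is the natural choice for scoring generated targets.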
