Detecting Winning Arguments with Large Language Models and Persuasion Strategies
About
Detecting persuasion in argumentative text is a challenging task with important implications for understanding human communication. This work investigates the role of persuasion strategies (such as Attack on reputation, Distraction, and Manipulative wording) in determining the persuasiveness of a text. We conduct experiments on three annotated argument datasets: Winning Arguments (built from the Change My View subreddit), Anthropic/Persuasion, and Persuasion for Good. Our approach leverages large language models (LLMs) with a Multi-Strategy Persuasion Scoring method that guides reasoning over six persuasion strategies. Results show that strategy-guided reasoning improves the prediction of persuasiveness. To better understand the influence of content, we organize the Winning Arguments dataset into broad discussion topics and analyze performance across them. We publicly release this topic-annotated version of the dataset to facilitate future research. Overall, our methodology demonstrates the value of structured, strategy-aware prompting for enhancing interpretability and robustness in argument quality assessment.
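As a rough illustration of how strategy-guided prompting might be wired up, the sketch below builds a scoring prompt over the named strategies and averages per-strategy scores into an overall persuasiveness estimate. Only three of the six strategies are named in the summary above, so the list is truncated; the 0-5 scale, the `build_prompt` helper, and the mean aggregation are all assumptions, not the paper's exact method.

```python
# Hypothetical sketch of Multi-Strategy Persuasion Scoring.
# Only three of the six strategies are named in the summary above;
# the scoring scale and the aggregation rule are assumptions.

STRATEGIES = [
    "Attack on reputation",
    "Distraction",
    "Manipulative wording",
    # ... the remaining three strategies of the full taxonomy
]


def build_prompt(argument: str) -> str:
    """Ask an LLM to rate each strategy's presence on a 0-5 scale."""
    lines = [
        "Rate how strongly the argument below uses each persuasion",
        "strategy on a 0-5 scale, then judge overall persuasiveness.",
        "",
        "Strategies:",
    ]
    lines += [f"- {s}" for s in STRATEGIES]
    lines += ["", "Argument:", argument]
    return "\n".join(lines)


def aggregate(scores: dict) -> float:
    """Combine per-strategy scores; a simple mean is assumed here."""
    return sum(scores.values()) / len(scores)
```

In practice the prompt would be sent to an LLM and the per-strategy scores parsed from its response before aggregation; a weighted combination could replace the plain mean.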
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Persuasion classification | Winning Arguments (test) | Accuracy | 64.53 | 43 |
| Donation amount prediction | Persuasion for Good (test) | MSE | 21.41 | 5 |
| Post-argument rating regression | Anthropic dataset (September 2024) | MSE | 0.67 | 5 |
| Argument Persuasiveness Prediction | Anthropic | RMSE | 0.82 | 3 |
| Argument Persuasiveness Prediction | Persuasion for Good | RMSE | 4.63 | 3 |