
Revisiting Contextual Toxicity Detection in Conversations

About

Understanding toxicity in user conversations is undoubtedly an important problem. Addressing "covert" or implicit cases of toxicity is particularly hard and requires context. Very few previous studies have analysed the influence of conversational context on human perception or on automated detection models. We dive deeper into both of these directions. We start by analysing existing contextual datasets and conclude that toxicity labelling by humans is generally influenced by the conversational structure, polarity and topic of the context. We then propose to bring these findings into computational detection models by introducing and evaluating (a) neural architectures for contextual toxicity detection that are aware of the conversational structure, and (b) data augmentation strategies that can help with modelling contextual toxicity. Our results show the encouraging potential of neural architectures that are aware of the conversation structure. We also demonstrate that such models can benefit from synthetic data, especially in the social media domain.
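As a rough illustration of the two ideas in the abstract (this is a hypothetical sketch, not the authors' released code): a common way to make a classifier aware of conversational structure is to concatenate the preceding turns with the target comment using separator markers before encoding, and a simple synthetic-data strategy is to pair existing targets with contexts drawn from other conversations. The function and marker names below are assumptions for illustration.

```python
# Hypothetical sketch of context-aware input construction and a simple
# context-swap augmentation; names and the "[SEP]" marker are illustrative.

SEP = " [SEP] "

def build_contextual_input(context_turns, target, max_context=3):
    """Join the last `max_context` context turns with the target comment,
    so a downstream encoder sees the conversational structure."""
    recent = context_turns[-max_context:]
    return SEP.join(recent + [target])

def swap_contexts(examples):
    """Create synthetic examples by pairing each target (and its label)
    with the context of the next conversation in the list."""
    out = []
    for i, (ctx, target, label) in enumerate(examples):
        other_ctx = examples[(i + 1) % len(examples)][0]
        out.append((other_ctx, target, label))
    return out

# Example: three context turns plus a target comment.
text = build_contextual_input(
    ["Nice weather today", "Totally agree", "Hope it lasts"],
    "You people ruin everything",
)
```

The concatenated string (or the swapped pairs) would then be fed to an ordinary text classifier; the point is only that context enters the input representation.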

Atijit Anuchitanukul, Julia Ive, Lucia Specia • 2021

Related benchmarks

Task               | Dataset     | F1 Score | Rank
Toxicity Detection | CAD (full)  | 52.5     | 11
Toxicity Detection | BBF         | 66.2     | 11
Toxicity Detection | BAD         | 76.5     | 11
Toxicity Detection | FBK full    | 41.3     | 10
Toxicity Detection | FBK flipped | 22       | 10
Toxicity Detection | HQR         | 81.1     | 10
Toxicity Detection | HQG         | 91.1     | 10
Toxicity Detection | CAD context | 68.1     | 10

Other info

Code
