
HateBERT: Retraining BERT for Abusive Language Detection in English

About

In this paper, we introduce HateBERT, a retrained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of English Reddit comments from communities banned for being offensive, abusive, or hateful, which we have collected and made publicly available. We present a detailed comparison between a general pre-trained language model and its abuse-inclined version, obtained by retraining on posts from the banned communities, across three English datasets for offensive language, abusive language, and hate speech detection. On all datasets, HateBERT outperforms the corresponding general BERT model. We also discuss a battery of experiments comparing the portability of the generic pre-trained language model and its abuse-inclined counterpart across the datasets, indicating that portability is affected by the compatibility of the annotated phenomena.
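The retraining described above uses BERT's standard masked-language-modelling objective on the RAL-E comments. As a library-independent illustration (not the authors' actual training code), the following sketch implements the usual BERT masking scheme: roughly 15% of positions are selected as prediction targets, and of those 80% become `[MASK]`, 10% become a random token, and 10% are left unchanged. The toy vocabulary is purely illustrative.

```python
import random

MASK = "[MASK]"
# Tiny illustrative vocabulary for the "replace with random token" branch.
TOY_VOCAB = ["the", "cat", "dog", "ran", "away"]

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    """Apply BERT-style MLM masking to a token list.

    Returns (masked_tokens, labels): labels hold the original token at
    selected positions (the model's prediction targets) and None elsewhere.
    Of the selected positions, 80% are replaced with [MASK], 10% with a
    random vocabulary token, and 10% are kept unchanged.
    """
    rng = rng or random.Random()
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)  # this position is a prediction target
            r = rng.random()
            if r < 0.8:
                masked.append(MASK)
            elif r < 0.9:
                masked.append(rng.choice(TOY_VOCAB))
            else:
                masked.append(tok)  # keep the original token
        else:
            labels.append(None)  # not a target; loss is not computed here
            masked.append(tok)
    return masked, labels
```

In practice this corruption is applied on the fly to each batch of RAL-E comments, and the language-model head is trained to recover the original tokens at the target positions, shifting BERT's representations toward the abusive-language domain before task-specific fine-tuning.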

Tommaso Caselli, Valerio Basile, Jelena Mitrović, Michael Granitzer • 2020

Related benchmarks

Task                         Dataset                  Result                 Rank
Toxicity Detection           HateXplain               AUC 72.63              21
Hate Speech Detection        SBIC                     Total F1 51            11
Hate Speech Detection        CREHate                  Total F1 49            11
Text Classification         SBIC                     Total Metric -0.0012   11
Text Classification         CREHate                  Total Score -5.00e-4   11
Safety Classification        DiaSafety (test)         AUROC 50.76            8
Toxicity Detection           IHC                      Accuracy 45.86         8
Abusive language detection   AbusEval (test)          Macro F1 76.5          3
Abusive language detection   OffensEval 2019 (test)   Macro F1 0.809         3
Abusive language detection   HatEval (test)           Macro F1 0.516         3

Other info

Code
