HatePrototypes: Interpretable and Transferable Representations for Implicit and Explicit Hate Speech Detection

About

Optimization of offensive content moderation models for different types of hateful messages is typically achieved through continued pre-training or fine-tuning on new hate speech benchmarks. However, existing benchmarks mainly address explicit hate toward protected groups and often overlook implicit or indirect hate, such as demeaning comparisons, calls for exclusion or violence, and subtle discriminatory language that still causes harm. While explicit hate can often be captured through surface features, implicit hate requires deeper, full-model semantic processing. In this work, we question the need for repeated fine-tuning and analyze the role of HatePrototypes, class-level vector representations derived from language models optimized for hate speech detection and safety moderation. We find that these prototypes, built from as few as 50 examples per class, enable cross-task transfer between explicit and implicit hate, with interchangeable prototypes across benchmarks. Moreover, we show that parameter-free early exiting with prototypes is effective for both hate types. We release the code, prototype resources, and evaluation scripts to support future research on efficient and transferable hate speech detection.
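To make the prototype idea concrete, here is a minimal sketch of building class-level prototypes from a small set of labeled examples and classifying new texts by nearest prototype. It assumes mean-pooled hidden states from an off-the-shelf hate speech encoder and cosine similarity; the checkpoint name, pooling choice, and similarity function are illustrative assumptions, not the paper's exact setup or released API.

```python
# Sketch of prototype-based hate speech classification (assumptions noted inline).
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: any encoder fine-tuned for hate speech detection can be used here.
MODEL_NAME = "cardiffnlp/twitter-roberta-base-hate"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME).eval()

@torch.no_grad()
def embed(texts: list[str]) -> torch.Tensor:
    """Mean-pool the last hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = batch["attention_mask"].unsqueeze(-1)          # (B, T, 1) mask
    states = model(**batch).last_hidden_state               # (B, T, H)
    return (states * hidden).sum(1) / hidden.sum(1)         # (B, H)

def build_prototypes(examples: dict[str, list[str]]) -> dict[str, torch.Tensor]:
    """One class prototype = mean embedding of its labeled examples (e.g. ~50 per class)."""
    return {label: embed(texts).mean(0) for label, texts in examples.items()}

def classify(text: str, prototypes: dict[str, torch.Tensor]) -> str:
    """Assign the class whose prototype is closest in cosine similarity."""
    vec = embed([text])[0]
    return max(
        prototypes,
        key=lambda label: torch.cosine_similarity(vec, prototypes[label], dim=0),
    )

# Example usage with placeholder data:
# protos = build_prototypes({"hate": ["..."], "not_hate": ["..."]})
# print(classify("some new post", protos))
```

Because the prototypes are just vectors over a shared embedding space, prototypes built on one benchmark can in principle be swapped in when scoring another, which is the cross-task transfer the abstract describes.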

Irina Proskurina, Marc-Antoine Carpentier, Julien Velcin • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Hate Speech Detection | HateXplain (test) | Macro F1 Score | 77.56 | 36
Hate Speech Detection | IHC (test) | Avg Exit | 10.5 | 12
Hate Speech Detection | SBIC (test) | Avg Exit | 10.48 | 12
Hate Speech Detection | OLID (test) | Avg Exit | 10.71 | 12
