
Just KIDDIN: Knowledge Infusion and Distillation for Detection of INdecent Memes

About

Toxicity identification in online multimodal environments remains a challenging task due to the complexity of contextual connections across modalities (e.g., textual and visual). In this paper, we propose a novel framework that integrates Knowledge Distillation (KD) from Large Visual Language Models (LVLMs) and knowledge infusion to enhance the performance of toxicity detection in hateful memes. Our approach extracts sub-knowledge graphs from ConceptNet, a large-scale commonsense Knowledge Graph (KG), to be infused within a compact VLM framework. The relational context between toxic phrases in captions and memes, together with visual concepts in memes, enhances the model's reasoning capabilities. Experimental results on two hate speech benchmark datasets demonstrate superior performance over state-of-the-art baselines across AU-ROC, F1, and Recall, with improvements of 1.1%, 7%, and 35%, respectively. Given the contextual complexity of the toxicity detection task, our approach showcases the significance of learning from both explicit (i.e., KG) and implicit (i.e., LVLMs) contextual cues, incorporated through a hybrid neurosymbolic approach. This is crucial for real-world applications where accurate and scalable recognition of toxic content is critical for creating safer online environments.
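The framework pairs explicit knowledge (ConceptNet subgraphs) with implicit knowledge distilled from an LVLM teacher into a compact student VLM. As a minimal sketch of the distillation component only, the standard recipe is a temperature-softened KL-divergence loss between teacher and student logits; the function names and the temperature value below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over the last axis, with temperature scaling."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    A generic KD objective (Hinton-style); the paper's exact loss may differ.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the LVLM teacher
    q = softmax(student_logits, temperature)  # student's softened predictions
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1).mean()
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return float(kl * temperature ** 2)
```

The `T**2` factor is the usual correction that keeps the soft-target gradients on the same scale as a hard-label cross-entropy term when the two are combined.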

Rahul Garg, Trilok Padhi, Hemang Jain, Ugur Kursuncu, Ponnurangam Kumaraguru • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Hateful meme classification | HatefulMemes (seen) | Accuracy: 78.7 | 11 |
| Meme Classification | HarMeme Dataset (test) | Accuracy: 85.03 | 11 |
| Hateful meme classification | HatefulMemes (unseen) | Accuracy: 77 | 11 |
| Meme Intensity Prediction | HarMeme (test) | Accuracy: 0.8107 | 6 |
| Meme Target Identification | HarMeme (test) | Accuracy: 0.7742 | 6 |
