
Detecting Harmful Memes and Their Targets

About

Among the various modes of communication in social media, Internet memes have emerged as a powerful means to convey political, psychological, and socio-cultural opinions. Although memes are typically humorous in nature, recent times have witnessed a proliferation of harmful memes that target and abuse various social entities. As most harmful memes are highly satirical and abstruse without appropriate context, off-the-shelf multimodal models may not be adequate to understand their underlying semantics. In this work, we propose two novel problem formulations: detecting harmful memes and identifying the social entities that these harmful memes target. To this end, we present HarMeme, the first benchmark dataset, containing 3,544 memes related to COVID-19. Each meme went through a rigorous two-stage annotation process. In the first stage, we labeled a meme as very harmful, partially harmful, or harmless; in the second stage, we further annotated the type of target(s) that each harmful meme points to: individual, organization, community, or society/general public/other. The evaluation results using ten unimodal and multimodal models highlight the importance of using multimodal signals for both tasks. We further discuss the limitations of these models and argue that more research is needed to address these problems.
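The two-stage annotation scheme described above can be mirrored as a small label schema with a consistency check: a stage-2 target label only applies to memes judged (partially or very) harmful in stage 1. This is a minimal illustrative sketch; the field names are hypothetical and may differ from those in the released dataset.

```python
# Illustrative label schema for HarMeme's two-stage annotation.
# Class names follow the abstract; the variable names are assumptions.
HARM_LEVELS = ["harmless", "partially harmful", "very harmful"]
TARGETS = ["individual", "organization", "community",
           "society/general public/other"]

def validate(harm_level, target):
    """Check a (stage-1, stage-2) label pair for consistency.

    Stage-2 target labels apply only to harmful memes; a harmless
    meme should carry no target annotation.
    """
    if harm_level not in HARM_LEVELS:
        raise ValueError(f"unknown harm level: {harm_level!r}")
    if harm_level == "harmless":
        return target is None
    return target in TARGETS

print(validate("very harmful", "organization"))  # True
print(validate("harmless", None))                # True
```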

Shraman Pramanick, Dimitar Dimitrov, Rituparna Mukherjee, Shivam Sharma, Md. Shad Akhtar, Preslav Nakov, Tanmoy Chakraborty • 2021

Related benchmarks

Task                       | Dataset             | Result         | Rank
---------------------------|---------------------|----------------|-----
Harmful Meme Detection     | FHM                 | Accuracy 59.14 | 29
Harmful Meme Detection     | MAMI                | Accuracy 63.2  | 19
Harmful Meme Detection     | HarM                | Accuracy 73.21 | 13
Harmful Meme Detection     | Harm-C (test)       | Accuracy 73.24 | 10
Harmful Meme Detection     | FHM (test)          | Accuracy 59.14 | 10
Harmful Meme Detection     | Harm-P (test)       | Accuracy 78.26 | 10
Multi-class classification | HarMeme Multi-class | Macro F1 69.65 | 8
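The multi-class result above is reported as macro-F1, which averages per-class F1 scores with equal weight per class, so rare classes such as "very harmful" count as much as the majority class. A minimal self-contained sketch (not the authors' evaluation code; the toy labels below are illustrative):

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: mean of per-class F1 scores, equal class weight."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy example over the three HarMeme stage-1 classes.
labels = ["harmless", "partially harmful", "very harmful"]
y_true = ["harmless", "harmless", "very harmful", "partially harmful"]
y_pred = ["harmless", "very harmful", "very harmful", "partially harmful"]
print(round(macro_f1(y_true, y_pred, labels), 3))  # → 0.778
```

Accuracy, used for the binary rows, would score the same predictions as 3/4 = 0.75; macro-F1 differs because it penalizes the miss on each class separately.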
