MIND: A Multi-agent Framework for Zero-shot Harmful Meme Detection
About
The rapid expansion of memes on social media has highlighted the urgent need for effective approaches to detect harmful content. However, traditional data-driven approaches struggle to detect new memes due to their evolving nature and the lack of up-to-date annotated data. To address this issue, we propose MIND, a multi-agent framework for zero-shot harmful meme detection that does not rely on annotated data. MIND implements three key strategies: 1) We retrieve similar memes from an unannotated reference set to provide contextual information. 2) We propose a bi-directional insight derivation mechanism to extract a comprehensive understanding of similar memes. 3) We then employ a multi-agent debate mechanism to ensure robust decision-making through reasoned arbitration. Extensive experiments on three meme datasets demonstrate that our proposed framework not only outperforms existing zero-shot approaches but also shows strong generalization across different model architectures and parameter scales, providing a scalable solution for harmful meme detection. The code is available at https://github.com/destroy-lonely/MIND.
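To make the three strategies concrete, below is a minimal, hypothetical Python sketch of how such a zero-shot pipeline could be wired together. All names here (`Meme`, `Agent`, `retrieve_similar`, `derive_insights`, `debate`) are illustrative assumptions rather than the repository's actual API, and simple word overlap stands in for real multimodal retrieval.

```python
# Hypothetical sketch of the pipeline described above: retrieve similar memes,
# derive bi-directional insights, then arbitrate via a multi-agent debate.
# All names are illustrative assumptions, not the repository's actual API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Meme:
    image_path: str
    caption: str


@dataclass
class Agent:
    # ask() wraps any text/multimodal model call (e.g. an API client).
    ask: Callable[[str], str]


def retrieve_similar(query: Meme, reference_set: List[Meme], k: int = 5) -> List[Meme]:
    """Step 1: fetch the k most relevant memes from an unannotated reference set.
    Caption word overlap is a stand-in for multimodal embedding similarity."""
    def overlap(m: Meme) -> int:
        return len(set(query.caption.lower().split()) & set(m.caption.lower().split()))
    return sorted(reference_set, key=overlap, reverse=True)[:k]


def derive_insights(query: Meme, neighbors: List[Meme], agent: Agent) -> str:
    """Step 2: bi-directional insight derivation - reason from the retrieved
    memes toward the query, and from the query back to the retrieved memes."""
    context = "\n".join(n.caption for n in neighbors)
    forward = agent.ask(f"Given these similar memes:\n{context}\n"
                        f"What do they suggest about: {query.caption}?")
    backward = agent.ask(f"How does '{query.caption}' relate to the memes above?")
    return forward + "\n" + backward


def debate(insights: str, debaters: List[Agent], judge: Agent) -> str:
    """Step 3: multi-agent debate - each debater argues a position on the
    insights, and a judge arbitrates the final harmful/harmless label."""
    arguments = [a.ask(f"Insights:\n{insights}\nIs the meme harmful? Argue your view.")
                 for a in debaters]
    return judge.ask("Return 'harmful' or 'harmless' given:\n" + "\n".join(arguments))
```

The sketch only fixes the control flow of the three stages; the actual framework plugs large (multimodal) language models into the agent roles and uses richer retrieval than caption overlap.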
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Harmful Meme Detection | FHM | Accuracy | 60.8 | 29 |
| Harmful Meme Detection | MAMI | Accuracy | 68.9 | 19 |
| Harmful Meme Detection | HarM | Accuracy | 68.93 | 13 |
| Harmful Meme Detection | GOAT-Bench (In-Domain) | Racism F1 | 69.1 | 11 |
| Harmful Meme Detection | MAMI (test) | Accuracy | 68.9 | 10 |
| Harmful Meme Detection | GOAT-Bench (Out-of-Domain) | Racism F1 | 47.4 | 7 |