
Not-in-Perspective: Towards Shielding Google's Perspective API Against Adversarial Negation Attacks

About

The rise of cyberbullying involving toxic comments on social media platforms has escalated the need for effective ways to monitor and moderate online interactions. Existing automated toxicity detection systems are based on machine or deep learning algorithms. However, such statistics-based solutions are generally prone to adversarial attacks that contain logic-based modifications, such as negation in phrases and sentences. In that regard, we present a set of formal reasoning-based methodologies that wrap around existing machine learning toxicity detection systems. Acting as both pre-processing and post-processing steps, our formal reasoning wrapper helps alleviate the negation attack problem and significantly improves the accuracy and efficacy of toxicity scoring. We evaluate different variations of our wrapper on multiple machine learning models against a negation adversarial dataset. Experimental results highlight the improvement of hybrid (formal reasoning and machine learning) methods over various purely statistical solutions.
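The abstract's idea of a formal-reasoning wrapper acting as pre- and post-processing around a statistical toxicity scorer can be illustrated with a minimal sketch. The paper's actual method is not specified here, so everything below is an assumption for illustration: `base_toxicity_score` is a hypothetical stand-in for an ML model such as Perspective, and the negation rule (an odd number of negators flips the score's polarity) is a toy heuristic, not the authors' reasoning engine.

```python
import re

def base_toxicity_score(text: str) -> float:
    """Hypothetical stand-in for an ML toxicity model (e.g., Perspective).

    Returns 1.0 if any toy "toxic" word appears, else 0.0. A real model
    would return a calibrated probability.
    """
    toxic_words = {"idiot", "stupid", "hate"}
    words = set(re.findall(r"[a-z]+", text.lower()))
    return 1.0 if words & toxic_words else 0.0

# Toy set of negators used by the pre-processing step (an assumption).
NEGATORS = {"not", "never", "no", "nobody", "nothing"}

def wrapped_toxicity_score(text: str) -> float:
    """Hybrid sketch: pre-process to detect negation, then post-process the score."""
    tokens = [tok.strip(".,!?") for tok in text.lower().split()]
    # Pre-processing: count negators to estimate the sentence's logical polarity.
    negations = sum(tok in NEGATORS or tok.endswith("n't") for tok in tokens)
    raw = base_toxicity_score(text)
    # Post-processing: an odd number of negations flips the raw score's polarity,
    # so a negated insult is no longer scored as toxic.
    return 1.0 - raw if negations % 2 == 1 else raw

print(wrapped_toxicity_score("you are an idiot"))      # scored toxic
print(wrapped_toxicity_score("you are not an idiot"))  # negation detected, score flipped
```

The point of the sketch is the architecture, not the rule: the statistical model is queried unchanged, while a logic layer around it corrects for the negation patterns that purely statistical scorers miss.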

Michail S. Alexiou, J. Sukarno Mertoguno • 2026

Related benchmarks

Task                 Dataset                                           Result              Rank
Toxicity Detection   Hosseini's 1st Negated Sentence 1.0 (test)        Toxicity Score: 29  24
Toxicity Detection   Perspective-based Negated Public (test)           Accuracy: 84        7
Toxicity Detection   Jigsaw Perspective-based Negated Private (test)   Accuracy: 87        7
