
Text Detoxification in isiXhosa and Yorùbá: A Cross-Lingual Machine Learning Approach for Low-Resource African Languages

About

Toxic language is one of the major barriers to safe online participation, yet robust mitigation tools are scarce for African languages. This study addresses this critical gap by investigating automatic text detoxification (toxic-to-neutral rewriting) for two low-resource African languages, isiXhosa and Yorùbá. The work contributes a novel, pragmatic hybrid methodology: a lightweight, interpretable TF-IDF and Logistic Regression model for transparent toxicity detection, and a controlled lexicon- and token-guided rewriting component. A parallel corpus of toxic-to-neutral rewrites, which captures idiomatic usage, diacritics, and code switching, was developed to train and evaluate the model. The detection component achieved stratified K-fold accuracies of 61-72% (isiXhosa) and 72-86% (Yorùbá), with per-language ROC-AUCs up to 0.88. The rewriting component successfully detoxified all detected toxic sentences while preserving 100% of non-toxic sentences. These results demonstrate that scalable, interpretable machine learning detectors combined with rule-based edits offer a competitive and resource-efficient solution for culturally adaptive safety tooling, setting a new benchmark for low-resource Text Style Transfer (TST) in African languages.
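To make the rewriting component concrete, the following is a minimal sketch of lexicon- and token-guided toxic-to-neutral rewriting. The lexicon entries (`LEXICON`) and the `detoxify` function below are hypothetical illustrations, not the paper's actual isiXhosa or Yorùbá resources; the real system operates on a curated parallel corpus with diacritics and code switching.

```python
# Minimal sketch of lexicon- and token-guided detoxification.
# LEXICON is a hypothetical placeholder mapping toxic tokens to neutral
# rewrites; an empty string means the token is dropped entirely.
LEXICON = {
    "idiot": "person",
    "stupid": "",
}

def detoxify(sentence: str, lexicon: dict) -> str:
    """Replace or drop lexicon-listed toxic tokens; leave other tokens intact."""
    rewritten = []
    for tok in sentence.split():
        # Strip trailing punctuation so the lexicon lookup matches the bare word.
        core = tok.strip(".,!?")
        repl = lexicon.get(core.lower())
        if repl is None:
            rewritten.append(tok)                      # not in lexicon: keep as-is
        elif repl:
            rewritten.append(tok.replace(core, repl))  # guided replacement
        # repl == "" -> drop the toxic token
    return " ".join(rewritten)

print(detoxify("You are a stupid idiot!", LEXICON))  # -> You are a person!
print(detoxify("Have a nice day.", LEXICON))         # unchanged: non-toxic input preserved
```

Note how a non-toxic sentence passes through unchanged, mirroring the paper's reported 100% preservation of non-toxic sentences.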

Abayomi O. Agbeyangi • 2026

Related benchmarks

Task                     | Dataset                                  | Result       | Rank
Toxicity Classification  | isiXhosa (Aggregated Stratified K-Fold)  | Accuracy 63  | 1
Toxicity Classification  | Yorùbá (Aggregated Stratified K-Fold)    | Accuracy 83  | 1
