
SGM: Safety Glasses for Multimodal Large Language Models via Neuron-Level Detoxification

About

Disclaimer: Samples in this paper may be harmful and cause discomfort. Multimodal large language models (MLLMs) enable multimodal generation but inherit toxic, biased, and NSFW signals from weakly curated pretraining corpora. This causes safety risks, especially under adversarial triggers, which existing training-free detoxification methods, typically applied late in the pipeline and operating opaquely, struggle to handle. We propose SGM, a white-box neuron-level multimodal intervention that acts like safety glasses for toxic neurons: it selectively recalibrates a small set of toxic expert neurons via expertise-weighted soft suppression, neutralizing harmful cross-modal activations without any parameter updates. We establish MM-TOXIC-QA, a multimodal toxicity evaluation framework, and compare SGM with existing detoxification techniques. Experiments on open-source MLLMs show that SGM mitigates toxicity in standard and adversarial conditions, cutting harmful rates from 48.2% to 2.5% while preserving fluency and multimodal reasoning. SGM is extensible, and its combined defenses, denoted as SGM*, integrate with existing detoxification methods for stronger safety performance, providing an interpretable, low-cost solution for toxicity-controlled multimodal generation.
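The abstract only sketches the mechanism at a high level, but the core idea, rescaling a small set of flagged neurons in proportion to a per-neuron toxicity "expertise" score, with no weight updates, can be illustrated with a minimal sketch. Everything below (the function name `soft_suppress`, the `alpha` and `threshold` parameters, and the scaling rule) is a hypothetical illustration, not the paper's actual implementation.

```python
import numpy as np

def soft_suppress(activations, toxicity_scores, alpha=0.9, threshold=0.5):
    """Hypothetical sketch of expertise-weighted soft suppression.

    activations:     (d,) hidden activations at one layer
    toxicity_scores: (d,) per-neuron toxicity "expertise" in [0, 1]
    alpha:           maximum suppression strength (assumed hyperparameter)
    threshold:       neurons scoring above this are treated as toxic experts
    """
    acts = np.asarray(activations, dtype=float)
    scores = np.asarray(toxicity_scores, dtype=float)
    mask = scores > threshold                 # only a small set of toxic experts
    scale = np.ones_like(acts)
    scale[mask] = 1.0 - alpha * scores[mask]  # higher expertise -> stronger damping
    return acts * scale                       # pure rescaling: no parameter updates

# Toy example: neurons 1 and 2 are flagged as toxic experts and get damped,
# while the other neurons pass through untouched.
acts = np.array([1.0, 2.0, -1.5, 0.5])
scores = np.array([0.1, 0.9, 0.8, 0.2])
out = soft_suppress(acts, scores)
# out -> [1.0, 0.38, -0.42, 0.5]
```

In a real MLLM this rescaling would be applied inside the forward pass (for example, via a forward hook on the relevant layers), which keeps the intervention interpretable and training-free, consistent with the abstract's claims.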

Hongbo Wang, MaungMaung AprilPyone, Isao Echizen • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Human Fluency Evaluation | HUMANITY | Generation Score | 9.1 | 12 |
| General Evaluation | MM-Vet | REC | 36.1 | 12 |
| Harmful Rate Evaluation | MM-SafetyBench OCR (test) | Illegal Activity Rate | 0.00e+0 | 10 |
| Safety Evaluation | MM-SafetyBench SD 1.0 | Illegal Activity Score | 13.8 | 5 |
