
CARO: Chain-of-Analogy Reasoning Optimization for Robust Content Moderation

About

Current large language models (LLMs), even those explicitly trained for reasoning, often struggle with ambiguous content moderation cases due to misleading "decision shortcuts" embedded in context. Inspired by cognitive psychology insights into expert moderation, we introduce CARO (Chain-of-Analogy Reasoning Optimization), a novel two-stage training framework to induce robust analogical reasoning in LLMs. First, CARO bootstraps analogical reasoning chains via retrieval-augmented generation (RAG) on moderation data and performs supervised fine-tuning (SFT). Second, we propose a customized direct preference optimization (DPO) approach to reinforce analogical reasoning behaviors explicitly. Unlike static retrieval methods, CARO dynamically generates tailored analogical references during inference, effectively mitigating harmful decision shortcuts. Extensive experiments demonstrate that CARO substantially outperforms state-of-the-art reasoning models (DeepSeek R1, QwQ), specialized moderation models (LLaMA Guard), and advanced fine-tuning and retrieval-augmented methods, achieving an average F1 score improvement of 24.9% on challenging ambiguous moderation benchmarks.
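To make the first stage concrete, the bootstrapping step can be pictured as: retrieve labeled moderation cases analogous to a new query, then assemble a training prompt whose target is an analogy-grounded reasoning chain. The sketch below is a minimal illustration, not the paper's implementation — the function names are hypothetical, and token-overlap (Jaccard) similarity stands in for whatever retriever CARO actually uses.

```python
# Hypothetical sketch of CARO's stage-1 bootstrapping: retrieve analogous
# labeled cases for a query, then build an SFT prompt that asks the model
# to reason by analogy. Jaccard token overlap is a stand-in for a real
# embedding-based retriever.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieve_analogies(query: str, case_bank: list[dict], k: int = 2) -> list[dict]:
    """Return the k labeled cases most similar to the query."""
    ranked = sorted(case_bank, key=lambda c: jaccard(query, c["text"]), reverse=True)
    return ranked[:k]

def build_sft_prompt(query: str, analogies: list[dict]) -> str:
    """Compose a prompt that grounds the moderation decision in precedents."""
    refs = "\n".join(f"- Case: {c['text']} -> Label: {c['label']}" for c in analogies)
    return (
        "Analogous precedent cases:\n" + refs +
        f"\n\nNew case: {query}\n"
        "Compare the new case to the precedents above, then decide the label."
    )

# Toy case bank of labeled moderation decisions (illustrative only).
case_bank = [
    {"text": "post mocking a public figure with satire", "label": "allow"},
    {"text": "post threatening violence against a group", "label": "remove"},
    {"text": "post sharing graphic violence footage", "label": "remove"},
]

query = "post mocking a politician with satire"
prompt = build_sft_prompt(query, retrieve_analogies(query, case_bank))
print(prompt)
```

The retrieved satirical precedent steers the model toward "allow" despite surface cues ("politician", a sensitive topic) that might otherwise act as a decision shortcut; the stage-2 DPO step would then prefer completions that follow such analogical chains over shortcut-driven ones.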

Bingzhe Wu, Haotian Lu, Yuchen Mou • 2026

Related benchmarks

Task               | Dataset                                                      | Result                   | Rank
Content Moderation | Multi-category Chinese content moderation dataset 1.0 (test) | Politics Accuracy: 89.7  | 15
Content Moderation | Aegis In-Distribution                                        | Pornography Score: 75    | 2
Content Moderation | OpenAI Out-of-Distribution                                   | Pornography Score: 82.6  | 2
Content Moderation | Toxic-Chat Out-of-Distribution                               | Pornography Score: 60.8  | 2
