
Aetheria: A multimodal interpretable content safety framework based on multi-agent debate and collaboration

About

The exponential growth of digital content presents significant challenges for content safety. Current moderation systems, often based on single models or fixed pipelines, exhibit limitations in identifying implicit risks and in providing interpretable judgment processes. To address these issues, we propose Aetheria, a multimodal, interpretable content safety framework based on multi-agent debate and collaboration. Employing a collaborative architecture of five core agents, Aetheria conducts in-depth analysis and adjudication of multimodal content through a dynamic, mutually persuasive debate mechanism grounded by RAG-based knowledge retrieval. Comprehensive experiments on our proposed benchmark (AIR-Bench) validate that Aetheria not only generates detailed and traceable audit reports but also demonstrates significant advantages over baselines in overall content safety accuracy, especially in the identification of implicit risks. This framework establishes a transparent and interpretable paradigm, significantly advancing the field of trustworthy AI content moderation.
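The abstract's core loop can be pictured as agents that each judge the content, share their evidence to persuade one another across rounds, and converge on a verdict backed by a traceable report. The sketch below is illustrative only: the agent names, keyword-based judging, the `retrieve_evidence` stand-in for RAG, and the majority fallback are all placeholder assumptions, not Aetheria's actual five-agent implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    label: str                      # "safe" or "unsafe"
    rationale: str                  # traceable line for the audit report
    hits: set = field(default_factory=set)

class Agent:
    """Toy agent: flags content by keyword and shares its evidence with peers."""
    def __init__(self, name, risky_terms):
        self.name = name
        self.risky_terms = set(risky_terms)

    def judge(self, content, shared_evidence):
        # An agent considers both its own risk lexicon and evidence raised by peers.
        hits = {t for t in self.risky_terms | shared_evidence if t in content}
        if hits:
            return Verdict("unsafe", f"{self.name}: matched {sorted(hits)}", hits)
        return Verdict("safe", f"{self.name}: no risk indicators", set())

def retrieve_evidence(content, knowledge_base):
    """Stand-in for RAG retrieval: surface policy terms relevant to the content."""
    return {term for term in knowledge_base if term in content}

def debate(agents, content, knowledge_base, max_rounds=3):
    """Debate loop: judge, share evidence (mutual persuasion), repeat to consensus."""
    evidence = retrieve_evidence(content, knowledge_base)
    report, verdicts = [], []
    for _ in range(max_rounds):
        verdicts = [a.judge(content, evidence) for a in agents]
        report += [v.rationale for v in verdicts]
        for v in verdicts:
            evidence |= v.hits      # persuasion: shared hits sway peers next round
        if len({v.label for v in verdicts}) == 1:
            break                   # consensus reached
    unsafe_votes = sum(v.label == "unsafe" for v in verdicts)
    return ("unsafe" if unsafe_votes * 2 > len(verdicts) else "safe"), report
```

For example, with agents `Agent("PolicyAgent", {"scam"})`, `Agent("VisionAgent", set())`, and `Agent("TextAgent", {"threat"})`, the text "this offer is a scam" is flagged by PolicyAgent in round one; its shared evidence then persuades the other two in round two, yielding a unanimous "unsafe" verdict with a per-agent rationale trail, mirroring the report-producing debate the abstract describes.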

Yuxiang He, Jian Zhao, Yuchen Yuan, Tianle Zhang, Wei Cai, Haojie Cheng, Ziyan Shi, Ming Zhu, Haichuan Tang, Chi Zhang, Xuelong Li • 2025

Related benchmarks

Task | Dataset | Metric | Score | Rank
Content Moderation | AIR-Bench Text + Image (test) | Precision | 83 | 8
Content Moderation | AIR-Bench Text Only (test) | Precision | 92 | 8
Content Moderation | AIR-Bench Image Only (test) | Precision | 90 | 8
