
Aggregating Diverse Cue Experts for AI-Generated Image Detection

About

The rapid emergence of image synthesis models poses challenges to the generalization of AI-generated image detectors. Existing methods often rely on model-specific features, leading to overfitting and poor generalization. In this paper, we introduce the Multi-Cue Aggregation Network (MCAN), a novel framework that integrates different yet complementary cues in a unified network. MCAN employs a mixture-of-encoders adapter to dynamically process these cues, enabling more adaptive and robust feature representation. Our cues include the input image itself, which represents the overall content, and high-frequency components that emphasize edge details. Additionally, we introduce a Chromatic Inconsistency (CI) cue, which normalizes intensity values and captures noise information introduced during the image acquisition process in real images, making these noise patterns more distinguishable from those in AI-generated content. Unlike prior methods, MCAN's novelty lies in its unified multi-cue aggregation framework, which integrates spatial, frequency-domain, and chromaticity-based information for enhanced representation learning. These cues are intrinsically more indicative of real images, enhancing cross-model generalization. Extensive experiments on the GenImage, Chameleon, and UniversalFakeDetect benchmarks validate the state-of-the-art performance of MCAN. On the GenImage dataset, MCAN outperforms the best state-of-the-art method by up to 7.4% in average accuracy (ACC) across eight different image generators.
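The abstract does not give the exact formulas for the cues, so the following is only a minimal sketch of what such cue extraction and aggregation could look like. The chromaticity normalization (dividing each channel by per-pixel intensity), the box-blur high-pass residual, and the softmax gating are all standard stand-ins assumed here for illustration, not MCAN's actual operators:

```python
import numpy as np

def chromaticity_cue(img):
    """Normalize out intensity, keeping per-pixel chromaticity.

    Dividing each channel by the per-pixel intensity sum removes brightness
    variation, which can make acquisition noise in real photographs easier
    to separate from the noise statistics of generated images.
    """
    img = img.astype(np.float64)
    intensity = img.sum(axis=-1, keepdims=True) + 1e-8  # avoid divide-by-zero
    return img / intensity

def high_frequency_cue(img, kernel_size=3):
    """Extract a high-frequency residual via a simple box-blur high-pass."""
    img = img.astype(np.float64)
    pad = kernel_size // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.zeros_like(img)
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= kernel_size ** 2
    return img - blurred  # residual emphasizes edges and fine texture

def aggregate_cues(features, gate_logits):
    """Softmax-weighted sum of per-cue feature vectors (mixture-style gating).

    `features` is an (n_cues, dim) array; `gate_logits` is (n_cues,).
    """
    w = np.exp(gate_logits - gate_logits.max())
    w /= w.sum()
    return (w[:, None] * features).sum(axis=0)
```

A dynamic gate of this kind lets the network weight each cue per input, rather than concatenating cue features with fixed importance.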

Lei Tan, Shuwei Li, Mohan Kankanhalli, Robby T. Tan • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Generated Image Detection | GenImage (test) | Average Accuracy: 96.9 | 103 |
| AI-generated image detection | Chameleon (test) | Accuracy: 69.61 | 54 |
| Deepfake Detection | UniversalFakeDetect 1.0 (test) | Accuracy (ProGAN): 100 | 42 |
