
Autonomous Chain-of-Thought Distillation for Graph-Based Fraud Detection

About

Graph-based fraud detection on text-attributed graphs (TAGs) requires jointly modeling rich textual semantics and relational dependencies. However, existing LLM-enhanced GNN approaches are constrained by predefined prompting and decoupled training pipelines, limiting reasoning autonomy and weakening semantic-structural alignment. We propose FraudCoT, a unified framework that advances TAG-based fraud detection through autonomous, graph-aware chain-of-thought (CoT) reasoning and scalable LLM-GNN co-training. To address the limitations of predefined prompts, we introduce a fraud-aware selective CoT distillation mechanism that generates diverse reasoning paths and enhances semantic-structural understanding. These distilled CoTs are integrated into node texts, providing GNNs with enriched, multi-hop semantic and structural cues for fraud detection. Furthermore, we develop an efficient asymmetric co-training strategy that enables end-to-end optimization while significantly reducing the computational cost of naive joint training. Extensive experiments on public and industrial benchmarks demonstrate that FraudCoT achieves up to 8.8% AUPRC improvement over state-of-the-art methods and delivers up to 1,066x speedup in training throughput, substantially advancing both detection performance and efficiency.
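The core augmentation step described above (distilled CoT rationales folded into node texts, which then feed a GNN's message passing) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the `[CoT]` marker, the toy bag-of-characters embedding, and the mean-aggregation step are all assumptions standing in for the actual encoder and GNN used in FraudCoT.

```python
# Hypothetical sketch of CoT-augmented node features for a text-attributed graph.
# The distilled chain-of-thought is appended to each node's raw text so the
# downstream GNN sees enriched semantic cues alongside structural ones.

def augment_node_texts(node_texts, distilled_cots):
    """Append each node's distilled CoT rationale (if any) to its raw text."""
    out = {}
    for nid, text in node_texts.items():
        cot = distilled_cots.get(nid)
        out[nid] = f"{text} [CoT] {cot}" if cot else text
    return out

def toy_embed(text, dim=8):
    """Deterministic bag-of-characters embedding (stand-in for a text encoder)."""
    vec = [0.0] * dim
    for ch in text:
        vec[ord(ch) % dim] += 1.0
    return vec

def aggregate_neighbors(embeddings, edges):
    """One mean-aggregation message-passing step over an undirected edge list."""
    out = {}
    for nid, emb in embeddings.items():
        neigh = [embeddings[v] for u, v in edges if u == nid]
        neigh += [embeddings[u] for u, v in edges if v == nid]
        pooled = [emb] + neigh
        out[nid] = [sum(vals) / len(pooled) for vals in zip(*pooled)]
    return out

# Illustrative two-node graph: a suspicious seller linked to an ordinary buyer.
node_texts = {0: "seller posts many duplicate reviews", 1: "ordinary buyer account"}
cots = {0: "burst of near-identical reviews suggests coordinated fraud"}
augmented = augment_node_texts(node_texts, cots)
embs = {nid: toy_embed(t) for nid, t in augmented.items()}
pooled = aggregate_neighbors(embs, edges=[(0, 1)])
```

In the full framework these enriched representations would be trained jointly with the LLM under the asymmetric co-training strategy; the sketch only shows the data-flow direction (CoT text into node attributes, then into neighborhood aggregation).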

Yuan Li, Jun Hu, Bryan Hooi, Bingsheng He, Cheng Chen • 2026

Related benchmarks

Task             Dataset         Metric          Result   Rank
Fraud Detection  InstantVideo    Macro F1 Score  83.21    15
Fraud Detection  DigitalMusic    Macro F1 Score  84.23    15
Fraud Detection  PromotionAbuse  Macro F1 Score  77.95    15
