
Routing-Aware Explanations for Mixture of Experts Graph Models in Malware Detection

About

Mixture-of-Experts (MoE) offers flexible graph reasoning by combining multiple views of a graph through a learned router. We investigate routing-aware explanations for MoE graph models in malware detection using control flow graphs (CFGs). Our architecture builds diversity at two levels. At the node level, each layer computes multiple neighborhood statistics and fuses them with an MLP, guided by a degree reweighting factor rho and a pooling choice lambda in {mean, std, max}, producing distinct node representations that capture complementary structural cues in CFGs. At the readout level, six experts, each tied to a specific (rho, lambda) view, output graph-level logits that the router weights into a final prediction. Post-hoc explanations are generated with edge-level attributions per expert and aggregated using the router gates so the rationale reflects both what each expert highlights and how strongly it is selected. Evaluated against single-expert GNN baselines such as GCN, GIN, and GAT on the same CFG dataset, the proposed MoE achieves strong detection accuracy while yielding stable, faithful attributions under sparsity-based perturbations. The results indicate that making the router explicit and combining multi-statistic node encoding with expert-level diversity can improve the transparency of MoE decisions for malware analysis.
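The router-weighted aggregation of per-expert edge attributions described above can be sketched as a simple weighted sum. This is an illustrative sketch, not the paper's implementation: the function name `routing_aware_attribution`, the array shapes, and the toy values are all assumptions, standing in for whatever attribution method and router the authors actually use.

```python
import numpy as np

def routing_aware_attribution(expert_attrs, gates):
    """Aggregate per-expert edge attributions using router gates.

    expert_attrs: array of shape (num_experts, num_edges) holding
        edge-level attribution scores produced by each expert.
    gates: array of shape (num_experts,) of router weights,
        assumed non-negative and summing to 1.
    Returns a (num_edges,) array: the gate-weighted attribution,
    so the final rationale reflects both what each expert highlights
    and how strongly the router selects it.
    """
    attrs = np.asarray(expert_attrs, dtype=float)
    gates = np.asarray(gates, dtype=float)
    return gates @ attrs  # weighted sum over the expert axis

# Toy example (hypothetical numbers): 3 experts, 4 CFG edges.
attrs = np.array([
    [0.9, 0.1, 0.0, 0.2],   # expert 0 highlights edge 0
    [0.2, 0.8, 0.1, 0.0],   # expert 1 highlights edge 1
    [0.0, 0.1, 0.7, 0.3],   # expert 2 highlights edge 2
])
gates = np.array([0.6, 0.3, 0.1])  # router mostly selects expert 0
agg = routing_aware_attribution(attrs, gates)
```

Because the gates rescale each expert's scores before summing, an edge ranked highly by a weakly selected expert contributes little to the final explanation.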

Hossein Shokouhinejad, Roozbeh Razavi-Far, Griffin Higgins, Ali A. Ghorbani • 2026

Related benchmarks

Task: Malware Detection
Dataset: BODMAS, PMML, and DikeDataset (test)
Result: Benign Precision 0.9623
Rank: 11
