
DANCE: Dynamic, Available, Neighbor-gated Condensation for Federated Text-Attributed Graphs

About

Federated graph learning (FGL) enables collaborative training on graph data across multiple clients. With the rise of large language models (LLMs), the textual attributes of FGL graphs are gaining attention. Text-attributed graph federated learning (TAG-FGL) improves on FGL by explicitly leveraging LLMs to process and integrate these textual features. However, current TAG-FGL methods face three main challenges. (1) Overhead: using LLMs to process long texts incurs high token and computation costs; to make TAG-FGL practical, we introduce graph condensation (GC) to reduce the computation load, but this choice brings new issues of its own. (2) Suboptimality: GC compresses multi-hop texts and neighborhoods into a condensed core with fixed LLM surrogates, but this one-shot condensation is not client-adaptive and often leads to suboptimal performance. (3) Interpretability: LLM-based condensation introduces a black-box bottleneck, since summaries lack faithful attribution and clear grounding in specific source spans, making local inspection and auditing difficult. To address these issues, we propose DANCE, a new TAG-FGL paradigm with GC. To improve suboptimal performance, DANCE performs a round-wise, model-in-the-loop condensation refresh using the latest global model. To enhance interpretability, DANCE preserves provenance by storing locally inspectable evidence packs that trace predictions back to the selected neighbors and source text spans. Across 8 TAG datasets, DANCE improves accuracy by 2.33% at an 8% condensation ratio while using 33.42% fewer tokens than baselines.
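To make the round-wise refresh and evidence-pack ideas concrete, here is a minimal toy sketch of how such a loop could look. This is an illustrative assumption, not the authors' implementation: the scoring function, the "local training" step, and the `EvidencePack` fields are all hypothetical stand-ins; the real method condenses text-attributed neighborhoods with LLM surrogates.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class EvidencePack:
    """Hypothetical provenance record kept on the client: which neighbors
    and source text spans support a condensed node."""
    node_id: int
    neighbor_ids: list
    source_spans: list  # e.g. (start, end) character offsets into source text

def condense(nodes, score_fn, ratio=0.08):
    """Toy condensation: keep the top-`ratio` nodes ranked by the *current*
    global model (here a scalar score), recording provenance for each."""
    scored = sorted(nodes, key=lambda n: score_fn(n["feat"]), reverse=True)
    k = max(1, int(len(scored) * ratio))
    kept = scored[:k]
    packs = [EvidencePack(n["id"], n["neighbors"], n["spans"]) for n in kept]
    return kept, packs

def federated_round(clients, global_weight):
    """One round: each client re-condenses with the latest global model
    (the round-wise refresh), trains locally, then updates are averaged."""
    score_fn = lambda x: global_weight * x
    local_weights = []
    for client in clients:
        core, evidence = condense(client["nodes"], score_fn)
        client["evidence"] = evidence  # stored locally for inspection/auditing
        # toy "local training": nudge the weight toward the core's mean feature
        local_weights.append(global_weight + 0.1 * mean(n["feat"] for n in core))
    return mean(local_weights)  # FedAvg-style aggregation
```

The point of the sketch is the control flow: condensation sits inside the federated loop and is redone every round with the latest global model, instead of being a one-shot preprocessing step, and each client retains an auditable record of what the condensed core was built from.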

Zekai Chen, Haodong Lu, Xunkai Li, Henan Sun, Jia Li, Hongchao Qin, Rong-Hua Li, Guoren Wang • 2026

Related benchmarks

Task                 Dataset         Result           Rank
Node Classification  Cora            Accuracy: 88.87  885
Node Classification  Citeseer        Accuracy: 80.19  275
Node Classification  wikiCS          Accuracy: 84.26  198
Node Classification  amazon-ratings  Accuracy: 47.84  138
Node Classification  REDDIT          Accuracy: 68.45  66
Node Classification  arXiv           Accuracy: 78.26  41
Node Classification  Instagram       Accuracy: 65.17  23
Node Classification  Children        Accuracy: 48.25  19
