Towards Federated Bayesian Network Structure Learning with Continuous Optimization

About

Traditionally, Bayesian network structure learning is often carried out at a central site, where all data is gathered. In practice, however, data may be distributed across different parties (e.g., companies, devices) who intend to collectively learn a Bayesian network but are not willing to disclose information related to their data owing to privacy or security concerns. In this work, we present a federated learning approach to estimate the structure of a Bayesian network from data that is horizontally partitioned across different parties. We develop a distributed structure learning method based on continuous optimization, using the alternating direction method of multipliers (ADMM), such that only the model parameters have to be exchanged during the optimization process. We demonstrate the flexibility of our approach by adopting it for both linear and nonlinear cases. Experimental results on synthetic and real datasets show that it achieves improved performance over the other methods, especially when there is a relatively large number of clients and each has a limited sample size.
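To make the exchange pattern concrete, below is a minimal sketch of consensus ADMM for the linear case, assuming a linear SEM with horizontally partitioned data. All function names (client_update, server_update, federated_admm) and hyperparameter values are illustrative, not the authors' implementation; clients share only their local adjacency-matrix estimates and dual variables, never raw data, and the NOTEARS-style acyclicity term h(W) is computed here only as a diagnostic rather than enforced via the augmented Lagrangian used in the paper.

```python
# Hypothetical sketch: federated linear SEM structure learning via consensus ADMM.
# Raw data stays on each client; only (W_k, U_k) matrices are exchanged.
import numpy as np
from scipy.linalg import expm


def client_update(S_k, Z, U_k, rho):
    """Closed-form local step: argmin_W  0.5/n_k * ||X_k - X_k W||_F^2
    + rho/2 * ||W - Z + U_k||_F^2, where S_k = X_k^T X_k / n_k."""
    d = S_k.shape[0]
    return np.linalg.solve(S_k + rho * np.eye(d), S_k + rho * (Z - U_k))


def server_update(W_list, U_list, rho, lam):
    """Consensus step: average client messages, apply L1 soft-thresholding,
    and zero the diagonal to rule out self-loops."""
    K = len(W_list)
    A = np.mean([W + U for W, U in zip(W_list, U_list)], axis=0)
    Z = np.sign(A) * np.maximum(np.abs(A) - lam / (rho * K), 0.0)
    np.fill_diagonal(Z, 0.0)
    return Z


def acyclicity(W):
    """NOTEARS acyclicity measure h(W) = tr(exp(W * W)) - d (diagnostic only here)."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d


def federated_admm(client_data, rho=1.0, lam=0.05, n_iters=100):
    d = client_data[0].shape[1]
    S = [X.T @ X / X.shape[0] for X in client_data]  # local sufficient statistics
    Z = np.zeros((d, d))                             # global consensus estimate
    U = [np.zeros((d, d)) for _ in client_data]      # scaled dual variables
    for _ in range(n_iters):
        W = [client_update(S_k, Z, U_k, rho) for S_k, U_k in zip(S, U)]
        Z = server_update(W, U, rho, lam)
        U = [U_k + W_k - Z for U_k, W_k in zip(U, W)]
    return Z


if __name__ == "__main__":
    # Toy example: 4 clients, 50 samples each, generated from a small chain DAG.
    rng = np.random.default_rng(0)
    d = 5
    true_W = np.zeros((d, d))
    true_W[0, 1], true_W[1, 2], true_W[2, 3] = 0.8, -0.7, 0.9
    clients = []
    for _ in range(4):
        noise = rng.normal(size=(50, d))
        X = noise @ np.linalg.inv(np.eye(d) - true_W)  # X = XW + E  =>  X = E (I - W)^{-1}
        clients.append(X)
    Z = federated_admm(clients)
    print("h(Z) =", acyclicity(Z))
    print(np.round(Z, 2))
```

Per round, each client transmits one d x d matrix, which is what makes the communication cost independent of the local sample sizes.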

Ignavier Ng, Kun Zhang • 2021

Related benchmarks

Task | Dataset | Result | Rank
Nonlinear Temporal Dynamics Prediction | HAI Client (P1) | Loss: 1.1815 | 4
Nonlinear Temporal Dynamics Prediction | HAI Client (P2) | Loss: 0.864 | 4
Nonlinear Temporal Dynamics Prediction | HAI Client (P3) | Loss: 1.1262 | 4
Nonlinear Temporal Dynamics Prediction | HAI Client Average | Loss: 1.0572 | 4
Bayesian Network Structure Learning | Heterogeneous synthetic data (d=20) | Communication Cost (MB): 5.12 | 1
Bayesian Network Structure Learning | Heterogeneous synthetic data (d=200) | Communication Cost (MB): 5.12e+3 | 1
