Calibrating Data to Sensitivity in Private Data Analysis
About
We present an approach to differentially private computation in which one does not scale up the magnitude of noise for challenging queries, but rather scales down the contributions of challenging records. While scaling down all records uniformly is equivalent to scaling up the noise magnitude, we show that scaling records non-uniformly can result in substantially higher accuracy by bypassing the worst-case requirements of differential privacy for the noise magnitudes. This paper details the data analysis platform wPINQ, which generalizes Privacy Integrated Queries (PINQ) to weighted datasets. Using a few simple operators (including a non-uniformly scaling Join operator), wPINQ can reproduce (and improve on) several recent results in graph analysis and introduces new generalizations (e.g., counting triangles with given degrees). We also show how to integrate probabilistic inference techniques to synthesize datasets respecting more complicated (and less easily interpreted) measurements.
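The core idea can be illustrated with a minimal sketch. This is not the authors' wPINQ implementation, just a toy weighted counting query: rather than calibrating Laplace noise to a worst-case sensitivity, each record carries a weight, and heavy records are scaled down (here, clipped to 1) so that a fixed noise magnitude suffices. All names below are hypothetical.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_weighted_count(weights, epsilon):
    """Release the total weight of a weighted dataset with epsilon-DP.

    Clipping every record's weight to at most 1 bounds the query's
    sensitivity by 1, so Laplace noise of scale 1/epsilon suffices,
    independent of how heavy any individual record was originally.
    """
    clipped = [min(w, 1.0) for w in weights]  # scale down heavy records
    return sum(clipped) + laplace_noise(1.0 / epsilon)
```

Note that only the heavy records lose accuracy under clipping; a uniform rescaling of all weights would instead be equivalent to inflating the noise for everyone, which is the contrast the paper draws.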
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Regression | Communities and Crime (1990 US Census / 1990 US LEMAS / 1995 FBI UCR), test (20%) | MSE (mean) | 0.0182 | 78 |
| Regression | California Housing Standard (test) | MSE | 0.5922 | 78 |
| Regression | Criteo Sponsored Search Conversion Log (test) | MSE | 3.12e+3 | 78 |
| Collaborative Filtering Recommendation | Synthetic Recommender Dataset (300 users, 200 items, 10% sparsity), test | RMSE | 1.057 | 16 |
| Cohort Analytics | Cohort Analytics 1.0 (evaluation) | P-VaR 0.95 | 1.34 | 13 |
| Training Process | Criteo Sponsored Search Conversion Log (train) | Training Time (s) | 3.4084 | 5 |
| Training Process | California Housing (train) | Training Time (s) | 0.2798 | 5 |
| Training Process | Communities and Crime (train) | Training Time (s) | 0.2881 | 5 |