
Counterfactual Representation Learning with Balancing Weights

About

A key to causal inference with observational data is achieving balance in predictive features associated with each treatment type. Recent literature has explored representation learning to achieve this goal. In this work, we discuss the pitfalls of these strategies, such as a steep trade-off between achieving balance and predictive power, and present a remedy via the integration of balancing weights in causal learning. Specifically, we theoretically link balance to the quality of propensity estimation, emphasize the importance of identifying a proper target population, and elaborate on the complementary roles of feature balancing and weight adjustments. Using these concepts, we then develop an algorithm for flexible, scalable and accurate estimation of causal effects. Finally, we show how the learned weighted representations may serve to facilitate alternative causal learning procedures with appealing statistical features. We conduct an extensive set of experiments on both synthetic examples and standard benchmarks, and report encouraging results relative to state-of-the-art baselines.
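To illustrate the balancing-weight ingredient the abstract refers to, here is a minimal, self-contained sketch of inverse propensity weighting (IPW) on synthetic confounded data. This is an illustrative toy, not the paper's algorithm: the data-generating process, the plain gradient-descent logistic regression for the propensity model, and all variable names are assumptions made for the example.

```python
import math
import random

random.seed(0)

# --- Synthetic observational data ------------------------------------------
# Confounder x drives both treatment assignment and outcome; true ATE = 2.0.
n = 20000
data = []
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    p_treat = 1.0 / (1.0 + math.exp(-1.5 * x))      # confounded assignment
    t = 1 if random.random() < p_treat else 0
    y = 3.0 * x + 2.0 * t + random.gauss(0.0, 0.1)  # outcome
    data.append((x, t, y))

# Naive difference in means is biased because treated units have larger x.
naive = (sum(y for x, t, y in data if t == 1) / sum(t for _, t, _ in data)
         - sum(y for x, t, y in data if t == 0) / sum(1 - t for _, t, _ in data))

# --- Propensity estimation: logistic regression via gradient descent -------
w, b = 0.0, 0.0
lr = 0.1
for _ in range(300):
    gw = gb = 0.0
    for x, t, _ in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw += (p - t) * x
        gb += (p - t)
    w -= lr * gw / n
    b -= lr * gb / n

# --- IPW (Horvitz-Thompson) estimate of the average treatment effect -------
s1 = s0 = 0.0
for x, t, y in data:
    e = 1.0 / (1.0 + math.exp(-(w * x + b)))        # estimated propensity
    e = min(max(e, 0.01), 0.99)                     # clip for stability
    if t == 1:
        s1 += y / e
    else:
        s0 += y / (1.0 - e)
ate_hat = s1 / n - s0 / n

print(f"naive estimate: {naive:.2f}, IPW estimate: {ate_hat:.2f}")
```

Weighting each unit by the inverse of its (estimated) assignment probability reweights the sample toward a common target population, which is the same balance objective the paper pursues jointly with representation learning.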

Serge Assaad, Shuxi Zeng, Chenyang Tao, Shounak Datta, Nikhil Mehta, Ricardo Henao, Fan Li, Lawrence Carin • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| CATE estimation | IHDP (100 train/test splits, out-sample) | ER_out | 3.52 | 53 |
| Policy Error Rate Estimation | HC-MNIST (out-sample) | Error Rate (out-sample) | 13.62 | 33 |
| Policy Decision Making | Synthetic (d_phi=2), out-sample | Policy Error Rate (ER_out) | 7.44 | 13 |
| Policy Decision Making | Synthetic (d_phi=1), out-sample | Error Rate (ER_out) | 34.97 | 13 |
