
Adversarial Counterfactual Environment Model Learning

About

A good model for action-effect prediction, called an environment model, is important for sample-efficient decision-making policy learning in many domains, such as robot control, recommender systems, and patient treatment selection. With such a model, we can take unlimited trials to identify appropriate actions, saving the cost of queries in the real world. This requires the model to handle unseen data correctly, also known as counterfactual data. However, standard data-fitting techniques do not automatically achieve such generalization ability and commonly result in unreliable models. In this work, we introduce counterfactual-query risk minimization (CQRM) in model learning for generalizing to a counterfactual dataset queried by a specific target policy. Since the target policies can be various and unknown during policy learning, we propose an adversarial CQRM objective in which the model learns on counterfactual data queried by adversarial policies, and we derive a tractable solution, GALILEO. We also discover that adversarial CQRM is closely related to adversarial model learning, explaining the effectiveness of the latter. We apply GALILEO to synthetic tasks and a real-world application. The results show that GALILEO makes accurate predictions on counterfactual data and thus significantly improves policies in real-world testing.
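The adversarial CQRM idea described above can be illustrated with a minimal min-max sketch (our simplification for intuition, not the paper's actual GALILEO algorithm): an adversarial "policy" repeatedly queries the action where the environment model's counterfactual prediction error is currently largest, and the model is then trained on the data generated by that query. The toy dynamics, the linear model class, and all parameter names here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dynamics: next state s' = s + a.
def true_step(s, a):
    return s + a

# Linear environment model s' ~ w*s + b*a with params = (w, b).
def model_step(params, s, a):
    w, b = params
    return w * s + b * a

actions = np.linspace(-1.0, 1.0, 5)   # candidate actions the adversary may query
params = np.array([0.5, 0.0])         # model init: w=0.5, b=0.0
lr = 0.1

for step in range(200):
    states = rng.normal(size=32)
    # Inner max: the adversarial policy picks the action whose model error is worst.
    errors = [np.mean((model_step(params, states, a) - true_step(states, a)) ** 2)
              for a in actions]
    a_adv = actions[int(np.argmax(errors))]
    # Outer min: gradient step on squared error under the adversarial query.
    resid = model_step(params, states, a_adv) - true_step(states, a_adv)
    grad = np.array([np.mean(2 * resid * states), np.mean(2 * resid * a_adv)])
    params -= lr * grad

print(params)  # approaches (1.0, 1.0), recovering the true dynamics
```

Because the adversary keeps steering training toward the worst-case query, the model ends up accurate for any action a target policy might take, which is the intuition behind generalizing to unknown target policies.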

Xiong-Hui Chen, Yang Yu, Zheng-Mao Zhu, Zhihua Yu, Zhenjun Chen, Chenghe Wang, Yinan Wu, Hongqiu Wu, Rong-Jun Qin, Ruijin Ding, Fangsheng Huang • 2022

Related benchmarks

Task | Dataset | Result | Rank
Continuous Control | MuJoCo Hopper (H=10) | Normalized Return: 13 | 10
Continuous Control | MuJoCo Hopper (H=20) | Normalized Return: 33.2 | 10
Continuous Control | MuJoCo Walker2d (H=10) | Normalized Return: 11.7 | 10
Continuous Control | MuJoCo | -- | 7
Off-policy Evaluation | DOPE averaged (three tasks) | Normalized Value Gap: 0.37 | 6
Continuous Control | MuJoCo Hopper (H=40) | Normalized Return: 53.5 | 5
Continuous Control | MuJoCo Walker2d (H=20) | Normalized Return: 29.9 | 5
Continuous Control | MuJoCo Walker2d (H=40) | Normalized Return: 61.2 | 5
Policy Optimization | MuJoCo Hopper (H=40) | Return: 53.5 | 5
Policy Optimization | MuJoCo Walker2d (H=20) | Return: 29.9 | 5

Showing 10 of 33 rows.

Other info

Code
