
A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning

About

Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy and can be seen as a no-regret algorithm in an online learning setting. We show that any such no-regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.
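The iterative scheme the abstract describes — roll out the current policy, have the expert relabel the observations it induces, and retrain a stationary deterministic policy on all data aggregated so far — can be sketched as follows. This is a minimal illustration, not the paper's implementation; `env_rollout`, `expert_policy`, and `train_classifier` are assumed interfaces introduced here for exposition.

```python
import numpy as np

def iterative_imitation(env_rollout, expert_policy, train_classifier, n_iters=10):
    """Sketch of the iterative no-regret-style imitation loop from the abstract.

    env_rollout(policy)      -> observations visited when executing `policy`
    expert_policy(obs)       -> expert's action label for `obs`
    train_classifier(X, y)   -> supervised policy fit on all aggregated data
    (all three callables are hypothetical interfaces, not the paper's code)
    """
    dataset_obs, dataset_act = [], []
    policy = expert_policy  # the first rollout may simply follow the expert
    for _ in range(n_iters):
        # Collect observations under the distribution the current policy induces.
        for obs in env_rollout(policy):
            dataset_obs.append(obs)
            dataset_act.append(expert_policy(obs))  # expert relabels each state
        # Retrain a single stationary deterministic policy on the aggregate.
        policy = train_classifier(np.array(dataset_obs), np.array(dataset_act))
    return policy
```

The key point the abstract makes is that training on the learner's own induced observation distribution (rather than only the expert's) is what yields guarantees under the sequential, non-i.i.d. setting.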

Stephane Ross, Geoffrey J. Gordon, J. Andrew Bagnell • 2010

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Topic Classification | AG-News | Accuracy | 93.4 | 225 |
| Sentiment Analysis | MR | Accuracy | 0.921 | 160 |
| Semantic Textual Similarity | STS-B | Spearman's Rho (×100) | 90.12 | 136 |
| Text Classification | AGNews | Accuracy | 92.2 | 119 |
| Paraphrase Detection | MRPC | Avg Accuracy | 83.58 | 89 |
| Natural Language Inference | MNLI | Accuracy (matched) | 84.7 | 80 |
| Paraphrase Identification | QQP | Accuracy | 85.9 | 78 |
| Offline Reinforcement Learning | Kitchen Partial | Normalized Score | 41.3 | 62 |
| Object Goal Navigation | HM3D-OVON Seen (val) | SR | 11.1 | 55 |
| Offline Reinforcement Learning | D4RL antmaze-umaze (diverse) | Normalized Score | 63.4 | 47 |

Showing 10 of 54 rows
