
Permutation Equivariant Model-based Offline Reinforcement Learning for Auto-bidding

About

Reinforcement learning (RL) for auto-bidding has shifted from using simplistic offline simulators (Simulation-based RL Bidding, SRLB) to offline RL on fixed real datasets (Offline RL Bidding, ORLB). However, ORLB policies are limited by the dataset's state space coverage, offering modest gains. While SRLB expands state coverage, its simulator-reality gap risks misleading policies. This paper introduces Model-based RL Bidding (MRLB), which learns an environment model from real data to bridge this gap. MRLB trains policies using both real and model-generated data, expanding state coverage beyond ORLB. To ensure model reliability, we propose: 1) A permutation equivariant model architecture for better generalization, and 2) A robust offline Q-learning method that pessimistically penalizes model errors. These form the Permutation Equivariant Model-based Offline RL (PE-MORL) algorithm. Real-world experiments show that PE-MORL outperforms state-of-the-art auto-bidding methods.
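The paper's exact architecture and penalty are not given in this abstract, but the two ideas it names can be sketched minimally. Below is an illustrative Deep-Sets-style permutation equivariant layer (output rows permute together with input rows, e.g. rows as advertisers) and a Bellman target pessimistically penalized by an estimated model error. All function names, the mean-pooling choice, and the penalty coefficient `beta` are assumptions for illustration, not PE-MORL's actual implementation.

```python
import numpy as np

def perm_equivariant_layer(X, W_self, W_mean, b):
    """Each output row depends on its own input row and a permutation-invariant
    mean over all rows, so permuting input rows permutes output rows identically."""
    pooled = X.mean(axis=0, keepdims=True)             # (1, d_in), invariant to row order
    return np.tanh(X @ W_self + pooled @ W_mean + b)   # (n, d_out)

def pessimistic_q_target(reward, q_next, model_error, beta=1.0, gamma=0.99):
    """Bellman target minus a penalty proportional to estimated model error,
    so the policy distrusts model-generated transitions it cannot verify."""
    return reward + gamma * q_next - beta * model_error

rng = np.random.default_rng(0)
n, d_in, d_out = 5, 4, 3                    # e.g. 5 advertisers, 4 features each
X = rng.normal(size=(n, d_in))
W_self = rng.normal(size=(d_in, d_out))
W_mean = rng.normal(size=(d_in, d_out))
b = rng.normal(size=(d_out,))

Y = perm_equivariant_layer(X, W_self, W_mean, b)
perm = rng.permutation(n)
Y_perm = perm_equivariant_layer(X[perm], W_self, W_mean, b)
assert np.allclose(Y[perm], Y_perm)         # equivariance: permute in = permute out
```

The equivariance check at the end is the property that motivates the architecture: the model's prediction for one advertiser should not depend on how the set of advertisers happens to be ordered.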

Zhiyu Mou, Miao Xu, Wei Chen, Rongquan Bai, Chuan Yu, Jian Xu • 2025

Related benchmarks

Task | Dataset | Result | Rank
Auto-bidding | Simulated Offline Advertising System, 3.0k Budget, 30 Advertisers | GMV 553.9 | 9
Auto-bidding | Simulated Offline Advertising System, 1.5k Budget, 30 Advertisers | GMV 468.5 | 9
Auto-bidding | Simulated Offline Advertising System, 2.0k Budget, 30 Advertisers | GMV 488.1 | 9
Auto-bidding | Simulated Offline Advertising System, 2.5k Budget, 30 Advertisers | GMV 511.9 | 9
