
A Player Selection Network for Scalable Game-Theoretic Prediction and Planning

About

While game-theoretic planning frameworks are effective at modeling multi-agent interactions, they require solving large optimization problems whose number of variables grows with the number of agents, resulting in long computation times that limit their use in large-scale, real-time systems. To address this issue, we propose 1) PSN Game, a learning-based, game-theoretic prediction and planning framework that reduces game size by learning a Player Selection Network (PSN); and 2) a Goal Inference Network (GIN) that makes it possible to use the PSN in incomplete-information games where other agents' intentions are unknown to the ego agent. A PSN outputs a player selection mask that distinguishes influential players from less relevant ones, enabling the ego player to solve a smaller, masked game involving only the selected players. By reducing the number of players included in the game, PSN shrinks the corresponding optimization problems, leading to faster solve times. Experiments in both simulated scenarios and real-world pedestrian trajectory datasets show that PSN is competitive with, and often improves upon, the evaluated explicit game-theoretic selection baselines in 1) prediction accuracy and 2) planning safety. Across scenarios, PSN typically selects substantially fewer players than are present in the full game, thereby reducing game size and planning complexity. PSN also generalizes, via the GIN, to settings in which agents' objectives are unknown, without test-time fine-tuning. By selecting only the most relevant players for decision-making, PSN Game provides a practical mechanism for reducing planning complexity that can be integrated into existing multi-agent planning frameworks.
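The core mechanism described above, thresholding a learned per-player relevance mask and solving a game over only the selected players, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the fixed 0.5 threshold, and the assumption that problem size scales linearly with the number of selected players are all illustrative assumptions.

```python
import numpy as np

def select_players(mask_scores, threshold=0.5):
    """Given per-player relevance scores in [0, 1] (as a PSN-style network
    might output), return the indices of players whose score exceeds the
    threshold. The thresholding rule here is a hypothetical choice."""
    mask_scores = np.asarray(mask_scores, dtype=float)
    return np.flatnonzero(mask_scores > threshold)

def masked_game_size(vars_per_player, selected):
    # The reduced game's decision variables scale with the number of
    # selected players rather than with all agents in the scene, which
    # is the source of the claimed speedup.
    return vars_per_player * len(selected)

# Hypothetical PSN output for a 4-agent scene: only agents 0 and 2 are
# deemed influential for the ego player's decision.
scores = [0.9, 0.1, 0.7, 0.05]
sel = select_players(scores)
print(sel, masked_game_size(12, sel))  # 2 of 4 players kept
```

With 12 decision variables per player, the masked game here has 24 variables instead of the full game's 48, illustrating how the mask shrinks the optimization problem.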

Tianyu Qiu, Eric Ouano, Fernando Palafox, Christian Ellis, David Fridovich-Keil • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-agent planning | 4-agent scenarios, inferred goals | Traj Start Success: 1.5 | 24 |
| Multi-agent trajectory prediction | 4-agent scenarios, inferred goals | ADE (m): 0.1816 | 22 |
| Multi-agent trajectory prediction | 10-agent scenarios, inferred goals | ADE (m): 0.2213 | 22 |
| Multi-agent trajectory planning | 10-agent scenarios, ground-truth goals | Trajectory Success Rate: 2.31 | 12 |
| Multi-agent planning | 10-agent scenarios, inferred goals | Trajectory Success Rate: 2.18 | 12 |
| Trajectory prediction | 20-agent scenarios | ADE: 0.3108 | 11 |
| Trajectory prediction | CITR pedestrian dataset | ADE: 0.4931 | 11 |
