
Sparse Masked Attention Policies for Reliable Generalization

About

In reinforcement learning, abstraction methods that remove unnecessary information from the observation are commonly used to learn policies that generalize better to unseen tasks. However, these methods often overlook a crucial weakness: the function that extracts the reduced-information representation has unknown generalization ability on unseen observations. In this paper, we address this problem by presenting an information-removal method that generalizes more reliably to new states. We accomplish this with a learned masking function that operates on, and is integrated with, the attention weights inside an attention-based policy network. We demonstrate that our method significantly improves policy generalization to unseen tasks in the Procgen benchmark compared to standard PPO and masking approaches.
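The abstract describes gating attention weights with a learned mask inside the policy network. A minimal sketch of that idea, assuming a sigmoid gate over per-key mask logits followed by renormalization (the paper's exact integration of the mask is not specified here; `masked_attention` and its parameters are illustrative names):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention(q, k, v, mask_logits):
    """Attention whose weights are gated by a (learned) mask.

    q: (n_q, d) queries, k/v: (n_k, d) keys and values.
    mask_logits: (n_k,) logits; sigmoid(mask_logits) in (0, 1) suppresses
    keys the mask deems irrelevant before weights are renormalized.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                # (n_q, n_k) attention scores
    weights = softmax(scores, axis=-1)           # standard attention weights
    gate = 1.0 / (1.0 + np.exp(-mask_logits))    # sigmoid gate per key
    gated = weights * gate                       # down-weight masked-out keys
    gated = gated / gated.sum(axis=-1, keepdims=True)  # renormalize rows
    return gated @ v, gated
```

In a full policy network the mask logits would be produced by a learned function of the observation and trained jointly with the PPO objective; here they are passed in directly so the gating step can be inspected in isolation.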

Caroline Horsch, Laurens Engwegen, Max Weltevrede, Matthijs T. J. Spaan, Wendelin Böhmer • 2026

Related benchmarks

Task                     Dataset          Result                  Rank
Reinforcement Learning   Procgen (test)   BigFish Return: 21.61   21
