
Act-Then-Measure: Reinforcement Learning for Partially Observable Environments with Active Measuring

About

We study Markov decision processes (MDPs) where agents have direct control over when and how they gather information, as formalized by action-contingent noiselessly observable MDPs (ACNO-MDPs). In these models, actions consist of two components: a control action that affects the environment, and a measurement action that affects what the agent can observe. To solve ACNO-MDPs, we introduce the act-then-measure (ATM) heuristic, which assumes that we can ignore future state uncertainty when choosing control actions. We show how following this heuristic may lead to shorter policy computation times and prove a bound on the performance loss incurred by the heuristic. To decide whether to take a measurement action, we introduce the concept of measuring value. We develop a reinforcement learning algorithm based on the ATM heuristic, using a Dyna-Q variant adapted for partially observable domains, and showcase its superior performance compared to prior methods on a number of partially observable environments.
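The two ideas in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the tabular Q-values, belief representation, and function names below are illustrative assumptions. The ATM heuristic picks the control action greedily against the current belief (ignoring future state uncertainty), and a simple measuring value compares acting with full state knowledge against acting on the belief alone; the agent measures only when that gain exceeds the measurement cost.

```python
# Hedged sketch of act-then-measure (ATM) and measuring value.
# `belief` maps state -> probability; `q` is a tabular Q-function
# q[state][action]; all names here are illustrative assumptions.

def expected_q(belief, q, action):
    """Expected Q-value of one control action under the current belief."""
    return sum(p * q[s][action] for s, p in belief.items())

def choose_control(belief, q, actions):
    """ATM heuristic: act greedily w.r.t. the belief,
    ignoring future state uncertainty."""
    return max(actions, key=lambda a: expected_q(belief, q, a))

def measuring_value(belief, q, actions):
    """Gain from knowing the state before acting: expected value of the
    per-state best action minus the value of the single belief-greedy action."""
    best_per_state = sum(p * max(q[s][a] for a in actions)
                         for s, p in belief.items())
    best_on_belief = max(expected_q(belief, q, a) for a in actions)
    return best_per_state - best_on_belief

def should_measure(belief, q, actions, cost):
    """Measure only when the expected gain exceeds the measurement cost."""
    return measuring_value(belief, q, actions) > cost
```

For example, with an even belief over two states whose optimal actions disagree (`q = {0: {'l': 1.0, 'r': 0.0}, 1: {'l': 0.0, 'r': 1.0}}`), the measuring value is 0.5, so the agent measures when the cost is below 0.5 and skips the measurement otherwise.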

Merlijn Krale, Thiago D. Simão, Nils Jansen • 2023

Related benchmarks

Task                    Dataset                   Metric              Result   Rank
Planning in ACNO-MDPs   Frozen Lake 4x4 Default   Total Reward        62.42    20
Planning in ACNO-MDPs   Frozen Lake 4x4 Hard      Reward Value        8.41     20
Planning in ACNO-MDPs   Frozen Lake 8x8           Cumulative Reward   3.29     20
Planning in ACNO-MDPs   Stochastic Taxi           Value               0.908    12
Planning in ACNO-MDPs   ICU Sepsis                Performance Value   0.74     12
