
Temporal Difference Learning with Constrained Initial Representations

About

Recently, there have been numerous attempts to enhance the sample efficiency of off-policy reinforcement learning (RL) agents interacting with an environment, including architecture improvements and new algorithms. Despite these advances, such approaches overlook the potential of directly constraining the initial representations of the input data, which can intuitively alleviate the distribution shift issue and stabilize training. In this paper, we introduce the Tanh function into the initial layer to fulfill such a constraint. We theoretically analyze the convergence properties of temporal difference learning with the Tanh function under linear function approximation. Motivated by these theoretical insights, we present our Constrained Initial Representations framework, tagged CIR, which is made up of three components: (i) the Tanh activation along with normalization methods to stabilize representations; (ii) a skip connection module to provide a linear pathway from the shallow layer to the deep layer; (iii) convex Q-learning, which allows a more flexible value estimate and mitigates potential conservatism. Empirical results show that CIR exhibits strong performance on numerous continuous control tasks, matching or surpassing existing strong baseline methods.
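The core idea of component (i) can be illustrated in isolation: passing the first layer's output through Tanh bounds every feature to (-1, 1), so the downstream network sees representations of a fixed scale even when the input distribution shifts drastically. The sketch below is a minimal, hypothetical illustration of that bounding effect in plain Python; the weights and layer shape are made up for the example, and the paper's actual architecture (normalization, skip connections, convex Q-learning) is not reproduced here.

```python
import math

def tanh_constrained_layer(x, w, b):
    """Hypothetical initial layer: a linear map followed by Tanh,
    so every output feature lies in [-1, 1] regardless of input scale."""
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

# Toy weights (illustrative only; not the paper's parameters).
w = [[0.5, -1.0], [2.0, 0.3]]
b = [0.1, -0.2]

small = tanh_constrained_layer([0.1, 0.2], w, b)       # in-distribution input
huge = tanh_constrained_layer([1e6, -1e6], w, b)       # extreme distribution shift

# The representation stays bounded even under the extreme shift,
# which is the stabilizing property the constraint is meant to provide.
assert all(-1.0 <= v <= 1.0 for v in small + huge)
```

Without the Tanh, the second input would produce features on the order of 1e6, destabilizing the temporal difference targets computed from them.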

Jiafei Lyu, Jingwen Yang, Zhongjian Qiao, Runze Liu, Zeyuan Liu, Deheng Ye, Zongqing Lu, Xiu Li • 2026

Related benchmarks

Task                    Dataset                                         Result                        Rank
Reinforcement Learning  DeepMind Control Suite Easy & Medium            Acrobot Swingup: 443.5        7
Locomotion              HumanoidBench 1.0 (test)                        Balance Hard: 98.51           7
Reinforcement Learning  DeepMind Control Suite (DMC) Hard Tasks (test)  Dog Run: 326.8                7
Locomotion              HumanoidBench                                   Door Navigation Score: 327.2  2
