
Rank-1 Approximation of Inverse Fisher for Natural Policy Gradients in Deep Reinforcement Learning

About

Natural gradients have long been studied in deep reinforcement learning for their fast convergence and covariant weight updates. However, computing natural gradients requires inverting the Fisher Information Matrix (FIM) at every iteration, which is computationally prohibitive. In this paper, we present an efficient and scalable natural policy optimization technique that leverages a rank-1 approximation of the full inverse FIM. We show theoretically that a rank-1 approximation to the inverse FIM converges faster than vanilla policy gradients under certain conditions and, under others, enjoys the same sample complexity as stochastic policy gradient methods. We benchmark our method on a diverse set of environments and show that it achieves superior performance to standard actor-critic and trust-region baselines.
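The abstract does not spell out how the rank-1 inverse is constructed, but the sketch below illustrates one standard way such a preconditioner can be computed cheaply: approximate the FIM with a damped single-sample outer product of the score function and invert it in closed form via the Sherman-Morrison identity, so the natural-gradient direction costs O(d) per update instead of the O(d^3) of a full inversion. The function name, the damping constant, and the single-sample Fisher estimate are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def rank1_natural_gradient(grad: np.ndarray,
                           score: np.ndarray,
                           damping: float = 1e-3) -> np.ndarray:
    """Precondition a policy gradient with a rank-1 inverse-Fisher estimate.

    Hypothetical sketch: the FIM is approximated by a damped single-sample
    outer product, F ~= damping * I + s s^T, where s = grad log pi(a|s).
    Sherman-Morrison gives the inverse in closed form:
        (lam*I + s s^T)^{-1} g = (g - s * (s^T g) / (lam + s^T s)) / lam
    so applying F^{-1} needs only dot products, never a d x d matrix.
    """
    coef = (score @ grad) / (damping + score @ score)
    return (grad - coef * score) / damping

# Toy usage with random vectors standing in for a sampled policy gradient
# and a sampled score function.
rng = np.random.default_rng(0)
g = rng.normal(size=1000)   # vanilla policy gradient estimate
s = rng.normal(size=1000)   # score sample, grad log pi(a|s)
nat_g = rank1_natural_gradient(g, s)
```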

Yingxiao Huo, Satya Prakash Dash, Radu Stoican, Samuel Kaski, Mingfei Sun • 2026

Related benchmarks

Task                     Dataset      Metric              Result     Rank
Reinforcement Learning   Walker       Average Returns     204.1      38
Reinforcement Learning   Humanoid     Zero-Shot Reward    4.63e+3    30
Reinforcement Learning   HalfCheetah  Average Return      2.94e+3    17
Reinforcement Learning   Hopper       Avg Episode Reward  331.2      15
Reinforcement Learning   Pendulum     Avg Episode Reward  -2.23e+3   15
Reinforcement Learning   CartPole     Average Reward      973.4      9
Reinforcement Learning   Swimmer      Average Returns     28.5       5
Reinforcement Learning   Pusher       Average Returns     -408.2     5
Reinforcement Learning   Acrobot      Average Returns     -94.1      5
