
Reinforcement co-Learning of Deep and Spiking Neural Networks for Energy-Efficient Mapless Navigation with Neuromorphic Hardware

About

Energy-efficient mapless navigation is crucial for mobile robots exploring unknown environments with limited on-board resources. Although recent deep reinforcement learning (DRL) approaches have been successfully applied to navigation, their high energy consumption limits their use in many robotic applications. Here, we propose a neuromorphic approach that combines the energy efficiency of spiking neural networks with the optimality of DRL, and benchmark it in learning control policies for mapless navigation. Our hybrid framework, spiking deep deterministic policy gradient (SDDPG), consists of a spiking actor network (SAN) and a deep critic network, which are trained jointly using gradient descent. The co-learning enables synergistic information exchange between the two networks, allowing them to overcome each other's limitations through shared representation learning. To evaluate our approach, we deployed the trained SAN on Intel's Loihi neuromorphic processor. When validated in simulated and real-world complex environments, our method on Loihi consumed 75 times less energy per inference than DDPG on a Jetson TX2, and also achieved a 1% to 4.2% higher rate of successful navigation to the goal, depending on the forward-propagation timestep size. These results reinforce our ongoing efforts to design brain-inspired algorithms for controlling autonomous robots with neuromorphic hardware.

Guangzhi Tang, Neelesh Kumar, Konstantinos P. Michmizos• 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Robot Obstacle Avoidance | Robot Obstacle Avoidance Disturbed Laser 6.0 (test) | Success Rate | 71 | 10
Robot Obstacle Avoidance | Robot Obstacle Avoidance Disturbed Laser 0.2 (test) | Success Rate | 78.5 | 10
Robot Obstacle Avoidance | Robot Obstacle Avoidance GN (test) | Success Rate | 71.5 | 10
Robot Obstacle Avoidance | Robot Obstacle Avoidance 8-bit Loihi weight (test) | Success Rate | 78.5 | 10
Robot Obstacle Avoidance | Robot Obstacle Avoidance 30% Zero weight, 5 rounds (test) | Success Rate | 59.3 | 10
Robot Obstacle Avoidance | Robot Obstacle Avoidance GN weight, 5 rounds (test) | Success Rate | 51.3 | 10
Capture The Flag | 1v1 CtF v1 (test) | Energy per Inference (J) | 0.468 | 5
Capture The Flag | 2v2 CtF v1 (test) | Energy per Inference (J) | 0.562 | 5
Parking | Parking v1 (test) | Energy per Inference (J) | 0.32 | 5
