
Deep Learning-Powered Visual SLAM Aimed at Assisting Visually Impaired Navigation

About

Despite advances in SLAM technology, robust operation under adverse conditions such as low texture, motion blur, or difficult lighting remains an open challenge. Such conditions are common in applications like assistive navigation for visually impaired people, where they undermine localization accuracy and tracking stability, reducing navigation reliability and safety. To overcome these limitations, we present SELM-SLAM3, a deep learning-enhanced visual SLAM framework that integrates SuperPoint and LightGlue for robust feature extraction and matching. We evaluated our framework on the TUM RGB-D, ICL-NUIM, and TartanAir datasets, which feature diverse and challenging scenarios. SELM-SLAM3 outperforms conventional ORB-SLAM3 by an average of 87.84% and exceeds state-of-the-art RGB-D SLAM systems by 36.77%. Our framework demonstrates enhanced performance under challenging conditions, such as low-texture scenes and fast motion, providing a reliable platform for developing navigation aids for the visually impaired.
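The matching stage is central here: SELM-SLAM3 replaces ORB's hand-crafted descriptors with learned SuperPoint descriptors and matches them with LightGlue, a learned matcher. As a minimal illustration of what the matching stage does (not the paper's actual pipeline), the sketch below implements classical mutual-nearest-neighbour matching of L2-normalised descriptors in NumPy; the function name and shapes are assumptions for this example. LightGlue improves on exactly this kind of greedy matching by reasoning jointly over both keypoint sets.

```python
import numpy as np

def mutual_nn_matches(desc0, desc1):
    """Mutual-nearest-neighbour matching of L2-normalised descriptors.

    desc0: (N, D) descriptors from image 0
    desc1: (M, D) descriptors from image 1
    Returns a (K, 2) array of index pairs (i in image 0, j in image 1).
    """
    sim = desc0 @ desc1.T               # cosine similarity matrix (N, M)
    nn01 = sim.argmax(axis=1)           # best image-1 match for each keypoint in 0
    nn10 = sim.argmax(axis=0)           # best image-0 match for each keypoint in 1
    idx0 = np.arange(desc0.shape[0])
    mutual = nn10[nn01] == idx0         # keep only pairs that agree both ways
    return np.stack([idx0[mutual], nn01[mutual]], axis=1)
```

In a SLAM front end, the resulting index pairs feed pose estimation (e.g. PnP with RANSAC); a learned matcher like LightGlue yields far more reliable pairs than this baseline in low-texture or blurred frames.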

Marziyeh Bamdad, Hans-Peter Hutter, Alireza Darvishy • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual SLAM | TUM RGB-D fr1 desk | ATE RMSE (cm) | 1.9 | 24 |
| Trajectory Estimation | TUM RGB-D Freiburg1 | RMSE | 0.019 | 17 |
| Visual SLAM | TUM RGB-D fr1/room | Translation RMSE (m) | 0.138 | 8 |
| Visual SLAM | TUM RGB-D fr1/desk2 | ATE RMSE | 0.04 | 7 |
| SLAM | TartanAir Hospital-Hard sequences | ATE (m), P037 | 0.049 | 4 |
| SLAM | TUM RGB-D Freiburg1 plant | ATE (m) | 0.034 | 3 |
| Visual SLAM | ICL-NUIM (various sequences) | ATE (m), lr-kt0 | 0.006 | 3 |
| SLAM | TUM RGB-D Freiburg1 rpy | ATE (m) | 0.021 | 2 |
| SLAM | TUM RGB-D Freiburg1 teddy | ATE (m) | 0.131 | 2 |
| SLAM | TUM RGB-D Freiburg1 floor | ATE (m) | 0.04 | 2 |
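Most rows above report ATE RMSE, the standard trajectory-accuracy metric for these benchmarks: the estimated trajectory is first rigidly aligned to the ground truth, then the root-mean-square of the per-frame position errors is taken. A minimal sketch of that computation, using the closed-form Kabsch alignment (rotation plus translation, no scale); the function name and the (N, 3) position-array convention are assumptions for this example.

```python
import numpy as np

def ate_rmse(gt, est):
    """Absolute Trajectory Error (RMSE) in the units of the input.

    gt, est: (N, 3) arrays of time-synchronised 3-D positions.
    The estimate is rigidly aligned to ground truth before scoring.
    """
    # Center both trajectories.
    mu_gt, mu_est = gt.mean(axis=0), est.mean(axis=0)
    gt_c, est_c = gt - mu_gt, est - mu_est
    # Kabsch: closed-form rotation minimising ||gt_c - R @ est_c||.
    H = est_c.T @ gt_c
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:   # guard against a reflection
        D[2, 2] = -1.0
    R = Vt.T @ D @ U.T
    t = mu_gt - R @ mu_est
    # Per-frame position errors after alignment, then RMSE.
    err = gt - (est @ R.T + t)
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```

With this convention, the "ATE RMSE (cm) 1.9" entry for fr1/desk corresponds to an average post-alignment position error of about 1.9 cm per frame.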
