
UniLGL: Learning Uniform Place Recognition for FOV-limited/Panoramic LiDAR Global Localization

About

Existing LGL methods typically consider only partial information (e.g., geometric features) from LiDAR observations or are designed for homogeneous LiDAR sensors, overlooking uniformity in LGL. In this work, a uniform LGL method is proposed, termed UniLGL, which simultaneously achieves spatial and material uniformity, as well as sensor-type uniformity. The key idea of the proposed method is to encode the complete point cloud, which contains both geometric and material information, into a pair of BEV images (i.e., a spatial BEV image and an intensity BEV image). An end-to-end multi-BEV fusion network is designed to extract uniform features, equipping UniLGL with spatial and material uniformity. To ensure robust LGL across heterogeneous LiDAR sensors, a viewpoint invariance hypothesis is introduced, which replaces the conventional translation equivariance assumption commonly used in existing LPR networks and supervises UniLGL to achieve sensor-type uniformity in both global descriptors and local feature representations. Finally, based on the mapping between local features on the 2D BEV image and the point cloud, a robust global pose estimator is derived that attains the globally optimal pose on SE(3) without requiring additional registration. To validate the effectiveness of the proposed uniform LGL, extensive benchmarks are conducted in real-world environments, and the results show that the proposed UniLGL is competitive with other state-of-the-art LGL methods. Furthermore, UniLGL has been deployed on diverse platforms, including full-size trucks and agile Micro Aerial Vehicles (MAVs), to enable high-precision localization and mapping as well as multi-MAV collaborative exploration in port and forest environments, demonstrating the applicability of UniLGL in industrial and field scenarios.
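The core encoding step described above, projecting a point cloud (with per-point intensity) into a paired spatial BEV image and intensity BEV image, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the grid extent, resolution, and the choice of max-height / max-intensity per cell are assumptions made here for the example.

```python
import numpy as np

def point_cloud_to_bev(points, intensities, x_range=(-50.0, 50.0),
                       y_range=(-50.0, 50.0), resolution=0.5):
    """Encode a point cloud into a pair of BEV images.

    points      : (N, 3) array of x, y, z coordinates
    intensities : (N,) array of per-point LiDAR intensities
    Returns (spatial_bev, intensity_bev), each of shape (H, W).
    Grid extent and resolution are illustrative choices, not the
    paper's settings; empty cells remain 0.
    """
    w = int((x_range[1] - x_range[0]) / resolution)
    h = int((y_range[1] - y_range[0]) / resolution)
    spatial_bev = np.zeros((h, w), dtype=np.float32)
    intensity_bev = np.zeros((h, w), dtype=np.float32)

    # Keep only points inside the BEV footprint.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts, ints = points[mask], intensities[mask]

    # Discretize x/y coordinates into pixel indices.
    cols = ((pts[:, 0] - x_range[0]) / resolution).astype(int)
    rows = ((pts[:, 1] - y_range[0]) / resolution).astype(int)

    # Per-cell maxima; np.maximum.at handles repeated indices correctly.
    np.maximum.at(spatial_bev, (rows, cols), pts[:, 2])   # geometric channel
    np.maximum.at(intensity_bev, (rows, cols), ints)      # material channel
    return spatial_bev, intensity_bev
```

Because both images share the same pixel grid, a local feature at BEV pixel (u, v) maps back to a metric (x, y) location in the LiDAR frame, which is what allows pose estimation directly from 2D feature correspondences.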

Hongming Shen, Xun Chen, Yulin Hui, Zhenyu Wu, Wei Wang, Qiyang Lyu, Tianchen Deng, Danwei Wang • 2025

Related benchmarks

Task                     | Dataset                                             | Result                  | Rank
LiDAR Place Recognition  | Garden LT                                           | Top-1 Recall: 98.4      | 50
Global Localization      | Garden LT                                           | Success Rate (%): 80.59 | 40
Global Localization      | MCD Panoramic                                       | Success Rate: 100       | 32
Global Localization      | MCD (FoV-limited)                                   | Success Rate: 91.15     | 32
Global Localization      | Mid_NTU FoV-limited                                 | LPR: 6.98               | 16
Global Localization      | OS_NTU Panoramic                                    | LPR: 7.35               | 16
LiDAR Place Recognition  | Snail (81R_03)                                      | Recall@1: 99.72         | 10
LiDAR Place Recognition  | Snail Large-scale urban driving scenarios (Average) | Recall@1: 99.2          | 10
LiDAR Place Recognition  | MCD (FoV-limited)                                   | Recall@1: 98.11         | 10
LiDAR Place Recognition  | MCD Panoramic                                       | Recall@1: 100           | 10

Showing 10 of 18 rows.
