
FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry

About

This paper proposes FAST-LIVO2: a fast, direct LiDAR-inertial-visual odometry framework that achieves accurate and robust state estimation in SLAM tasks and shows great potential for real-time, onboard robotic applications. FAST-LIVO2 fuses IMU, LiDAR, and image measurements efficiently through an ESIKF. To address the dimension mismatch between the heterogeneous LiDAR and image measurements, we use a sequential update strategy in the Kalman filter. To enhance efficiency, we use direct methods for both the visual and LiDAR fusion, where the LiDAR module registers raw points without extracting edge or plane features and the visual module minimizes direct photometric errors without extracting ORB or FAST corner features. The fusion of visual and LiDAR measurements is based on a single unified voxel map, where the LiDAR module constructs the geometric structure for registering new LiDAR scans and the visual module attaches image patches to the LiDAR points. To enhance the accuracy of image alignment, we use plane priors from the LiDAR points in the voxel map (and even refine the plane prior) and update the reference patch dynamically after new images are aligned. Furthermore, to enhance the robustness of image alignment, FAST-LIVO2 employs an on-demand raycast operation and estimates the image exposure time in real time. Lastly, we detail three applications of FAST-LIVO2: UAV onboard navigation, demonstrating the system's computational efficiency for real-time onboard navigation; airborne mapping, showcasing the system's mapping accuracy; and 3D model rendering (mesh-based and NeRF-based), underscoring the suitability of our reconstructed dense map for subsequent rendering tasks. We open-source our code, dataset, and applications on GitHub to benefit the robotics community.
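The sequential update strategy mentioned above can be illustrated with a minimal, generic EKF sketch (not the paper's actual ESIKF implementation): the LiDAR measurement updates the prior first, and the visual measurement then updates the resulting posterior, so the two heterogeneous measurement types never need to be stacked into one residual vector. The toy 2-D state, measurement models, and noise values below are illustrative assumptions only.

```python
import numpy as np

def kalman_update(x, P, z, h, H, R):
    """One EKF measurement update: state x, covariance P,
    measurement z with model h(x), Jacobian H, noise covariance R."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - h(x))            # posterior state
    P = (np.eye(len(x)) - K @ H) @ P  # posterior covariance
    return x, P

# Sequential fusion on a toy 2-D state: apply the "LiDAR" update first,
# then the "visual" update against the posterior left by the first step.
x, P = np.zeros(2), np.eye(2)
H = np.eye(2)  # direct observation of the toy state
x, P = kalman_update(x, P, np.array([1.0, 0.0]), lambda s: s, H,
                     0.1 * np.eye(2))  # LiDAR-like measurement
x, P = kalman_update(x, P, np.array([1.1, 0.1]), lambda s: s, H,
                     0.2 * np.eye(2))  # visual-like measurement
print(x, np.diag(P))
```

Each update shrinks the covariance, and the order-dependent chaining is exactly what lets measurements of different dimensions be absorbed one modality at a time.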

Chunran Zheng, Wei Xu, Zuhao Zou, Tong Hua, Chongjian Yuan, Dongjiao He, Bingyang Zhou, Zheng Liu, Jiarong Lin, Fangcheng Zhu, Yunfan Ren, Rong Wang, Fanle Meng, Fu Zhang• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Pose Estimation | MCD | Max Error | 0.439 | 36 |
| Pose Estimation | MARS-LVIG | Max Error | 3.948 | 17 |
| Pose Estimation | M2UD | Max Error | 2.392 | 16 |
| Odometry | R3LIVE (campus02) | End-to-End Error | 0.01 | 11 |
| Odometry | R3LIVE (park0) | End-to-End Error | 0.04 | 11 |
| Absolute Translation Error | MARS-LVIG | ATE (AMtown01) | 2.57 | 11 |
| Odometry | R3LIVE (park1) | End-to-End Error | 0.54 | 11 |
| Pose Estimation | Diter++ | Max Error | 0.505 | 11 |
| Odometry | R3LIVE Average | End-to-End Error | 2.25 | 11 |
| Absolute Translation Error | i2Nav-Robot | ATE (Building 02) | 1.65 | 11 |

Showing 10 of 29 rows.
