
AIM-SLAM: Dense Monocular SLAM via Adaptive and Informative Multi-View Keyframe Prioritization with Foundation Model

About

Geometric foundation models have recently emerged as a promising alternative for addressing the challenge of dense reconstruction in monocular visual simultaneous localization and mapping (SLAM). Although geometric foundation models enable SLAM to leverage a variable number of input views, previous methods remain confined to two-view pairs or fixed-length inputs, without sufficient consideration of geometric context for view selection. To tackle this problem, we propose AIM-SLAM, a dense monocular SLAM framework that exploits adaptive and informative multi-view keyframe prioritization with dense pointmap predictions from the visual geometry grounded transformer (VGGT). Specifically, we introduce the selective information- and geometric-aware multi-view adaptation (SIGMA) module, which employs voxel overlap and information gain to retrieve a candidate set of keyframes and adaptively determine its size. Furthermore, we formulate a joint multi-view Sim(3) optimization that enforces consistent alignment across selected views, substantially improving pose estimation accuracy. The effectiveness of AIM-SLAM is demonstrated on real-world datasets, where it achieves state-of-the-art pose estimation performance and accurate dense reconstruction results. Our system supports ROS integration, and code is available at https://aimslam.github.io/.
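The abstract does not detail how SIGMA combines voxel overlap and information gain, so the following is only a hypothetical sketch of the idea: rank keyframes by how much of the query view's voxelized pointmap they cover, and keep adding keyframes while each one still contributes new voxels, which lets the candidate-set size adapt to the scene. All function names, thresholds, and the greedy strategy are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def voxelize(points, voxel_size=0.05):
    """Quantize an (N, 3) pointmap into a set of occupied voxel indices."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

def overlap_score(query_voxels, keyframe_voxels):
    """Fraction of the query's voxels also observed by the keyframe."""
    if not query_voxels:
        return 0.0
    return len(query_voxels & keyframe_voxels) / len(query_voxels)

def select_candidates(query_points, keyframes, voxel_size=0.05,
                      min_overlap=0.2, gain_threshold=0.05):
    """Greedily pick keyframes that overlap the query view but still add
    previously uncovered voxels (a simple information-gain proxy); the
    loop stops on its own, so the candidate-set size is adaptive."""
    qv = voxelize(query_points, voxel_size)
    ranked = sorted(keyframes.items(),
                    key=lambda kv: overlap_score(qv, voxelize(kv[1], voxel_size)),
                    reverse=True)
    covered, selected = set(), []
    for kf_id, pts in ranked:
        kv = voxelize(pts, voxel_size)
        if overlap_score(qv, kv) < min_overlap:
            break  # remaining keyframes overlap even less
        gain = len((kv & qv) - covered) / max(len(qv), 1)
        if gain < gain_threshold:
            continue  # redundant view: adds almost no new coverage
        covered |= kv & qv
        selected.append(kf_id)
    return selected
```

Under this sketch, a keyframe nearly identical to an already-selected one is skipped (zero gain) even though its raw overlap is high, which is the intuition behind combining overlap with information gain.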
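The joint multi-view Sim(3) optimization itself is not given in this abstract. As background, the two-view building block — a similarity transform (scale s, rotation R, translation t) aligning one pointmap to another in a least-squares sense — has a well-known closed form, the Umeyama method. A minimal NumPy sketch (the function name is ours, not from the paper):

```python
import numpy as np

def umeyama_sim3(src, dst):
    """Closed-form Sim(3) = (s, R, t) minimizing ||dst - (s * R @ src + t)||^2
    over corresponding (N, 3) point sets, following Umeyama's method."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)           # cross-covariance of centered sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                     # guard against reflections
    R = U @ S @ Vt
    var_src = (xs ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

A joint multi-view formulation, as AIM-SLAM proposes, would instead optimize all selected views' Sim(3) parameters together for cross-view consistency; this pairwise solver only illustrates the underlying alignment objective.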

Jinwoo Jeon, Dong-Uk Seo, Eungchang Mason Lee, Hyun Myung• 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual-Inertial Odometry | EuRoC (All sequences) | MH1 Error: 0.055 | 62 |
| Camera pose estimation | TUM RGB-D | Error (desk): 0.017 | 26 |
| Dense Reconstruction | TUM RGB-D | Completion Error: 0.026 | 9 |
| Dense Reconstruction | EuRoC Vicon Room sequences | Accuracy: 7.2 | 4 |
