
DARES: Depth Anything in Robotic Endoscopic Surgery with Self-supervised Vector-LoRA of the Foundation Model

About

Robotic-assisted surgery (RAS) relies on accurate depth estimation for 3D reconstruction and visualization. While foundation models such as the Depth Anything Model (DAM) show promise, applying them directly to surgery often yields suboptimal results. Fully fine-tuning on limited surgical data can cause overfitting and catastrophic forgetting, compromising model robustness and generalization. Although Low-Rank Adaptation (LoRA) addresses some adaptation issues, its uniform parameter distribution ignores the inherent feature hierarchy, in which earlier layers, which learn more general features, require more parameters than later ones. To tackle this issue, we introduce Depth Anything in Robotic Endoscopic Surgery (DARES), a novel approach that applies a new adaptation technique, Vector Low-Rank Adaptation (Vector-LoRA), to DAM V2 for self-supervised monocular depth estimation in RAS scenes. Vector-LoRA improves learning efficiency by allocating more parameters to earlier layers and gradually fewer to later ones. We also design a reprojection loss based on the multi-scale SSIM error to enhance depth perception by better tailoring the foundation model to the specific requirements of the surgical environment. The proposed method is validated on the SCARED dataset and outperforms recent state-of-the-art self-supervised monocular depth estimation techniques, improving the absolute relative error metric by 13.3%. Code and pre-trained weights are available at https://github.com/mobarakol/DARES.
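The core idea of Vector-LoRA — a per-layer vector of LoRA ranks that decreases from early to late layers, instead of one uniform rank — can be sketched as below. The linear schedule and the endpoint ranks (16 down to 4) are illustrative assumptions for this sketch, not the exact values used in the paper.

```python
def vector_lora_ranks(num_layers, r_first=16, r_last=4):
    """Return a linearly decreasing LoRA rank per transformer layer.

    Earlier layers, which learn more general features, receive a
    higher rank (more trainable parameters) than later, more
    task-specific layers. Endpoint ranks are illustrative only.
    """
    if num_layers == 1:
        return [r_first]
    step = (r_first - r_last) / (num_layers - 1)
    return [round(r_first - i * step) for i in range(num_layers)]


# Example: a 4-layer backbone gets ranks [16, 12, 8, 4], so layer 0
# trains rank-16 adapters while layer 3 trains only rank-4 adapters.
print(vector_lora_ranks(4))
```

In a LoRA setup, each layer's frozen weight W is then augmented with a trainable low-rank update B @ A, where A has shape (rank, d_in) and B has shape (d_out, rank), with `rank` taken from this per-layer vector rather than a single global value.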

Mona Sheikh Zeinoddin, Chiara Lena, Jiongqi Qu, Luca Carlini, Mattia Magro, Seunghoi Kim, Elena De Momi, Sophia Bano, Matthew Grech-Sollars, Evangelos Mazomenos, Daniel C. Alexander, Danail Stoyanov, Matthew J. Clarkson, Mobarakol Islam · 2024

Related benchmarks

| Task                        | Dataset                                      | Metric  | Result | Rank |
|-----------------------------|----------------------------------------------|---------|--------|------|
| Monocular Depth Estimation  | C3VD (test)                                  | Abs Rel | 0.134  | 16   |
| Monocular Depth Estimation  | SimCol3D (test)                              | Abs Rel | 0.077  | 8    |
| Monocular Depth Estimation  | CSD (test)                                   | Abs Rel | 0.116  | 8    |
| Ego-motion estimation       | SimCol3D SyntheticColon I sequence s2 (test) | ATE     | 0.262  | 6    |
| Ego-motion estimation       | SimCol3D SyntheticColon I sequence s1 (test) | ATE     | 0.2678 | 6    |
| Ego-motion estimation       | SimCol3D SyntheticColon I sequence s3 (test) | ATE     | 0.1594 | 6    |
| Camera pose estimation      | SimCol3D (10 random frames)                  | ATE     | 0.0109 | 6    |
