
RoboLLM: Robotic Vision Tasks Grounded on Multimodal Large Language Models

About

Robotic vision applications often necessitate a wide range of visual perception tasks, such as object detection, segmentation, and identification. While there have been substantial advances in these individual tasks, integrating specialized models into a unified vision pipeline presents significant engineering challenges and costs. Recently, Multimodal Large Language Models (MLLMs) have emerged as novel backbones for various downstream tasks. We argue that leveraging the pre-training capabilities of MLLMs enables the creation of a simplified framework, thus mitigating the need for task-specific encoders. Specifically, the large-scale pretrained knowledge in MLLMs allows for easier fine-tuning to downstream robotic vision tasks and yields superior performance. We introduce the RoboLLM framework, equipped with a BEiT-3 backbone, to address all visual perception tasks in the ARMBench challenge, a large-scale robotic manipulation dataset covering real-world warehouse scenarios. RoboLLM not only outperforms existing baselines but also substantially reduces the engineering burden associated with model selection and tuning. The source code is publicly available at https://github.com/longkukuhi/armbench.
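The core idea, one pretrained multimodal backbone shared across all perception tasks, each with only a lightweight task-specific head, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and head names are invented for clarity, and a small stand-in network replaces the pretrained BEiT-3 backbone that RoboLLM actually fine-tunes.

```python
import torch
import torch.nn as nn


class UnifiedVisionModel(nn.Module):
    """Sketch of a unified pipeline: one shared (pretrained) backbone
    plus lightweight per-task heads, instead of one specialized model
    per task. Head names here are illustrative, not the paper's API."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone  # assumed pretrained (BEiT-3 in the paper)
        self.heads = nn.ModuleDict({
            "segmentation": nn.Linear(feat_dim, num_classes),   # per-instance class logits
            "identification": nn.Linear(feat_dim, num_classes), # object ID logits
            "defect": nn.Linear(feat_dim, 2),                   # defect vs. no defect
        })

    def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
        feats = self.backbone(x)          # shared representation
        return self.heads[task](feats)    # task-specific prediction


# Stand-in backbone for the sketch; in practice, load pretrained BEiT-3
# weights and fine-tune them on the downstream robotic vision tasks.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
model = UnifiedVisionModel(backbone, feat_dim=256, num_classes=10)

out = model(torch.randn(4, 3, 32, 32), task="identification")
print(out.shape)  # torch.Size([4, 10])
```

Because every head reads from the same backbone features, fine-tuning the backbone once benefits all downstream tasks, which is the engineering simplification the abstract argues for.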

Zijun Long, George Killick, Richard McCreadie, Gerardo Aragon Camarasa • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Object Instance Segmentation | ARMBench Mixed-Object Tote (test) | mAP50: 83 | 44 |
| Object Identification | ARMBench (test) | Recall@1: 98 | 10 |
| Defect Detection | ARMBench | Multi-Pick Precision: 84 | 3 |
| Object Instance Segmentation | ARMBench Zoomed-Out Tote (test) | mAP50: 57 | 2 |
| Object Instance Segmentation | ARMBench Same-Object Tote (test) | mAP50: 15 | 2 |
