
LabelAny3D: Label Any Object 3D in the Wild

About

Detecting objects in 3D space from monocular input is crucial for applications ranging from robotics to scene understanding. Despite advanced performance in the indoor and autonomous driving domains, existing monocular 3D detection models struggle with in-the-wild images due to the lack of 3D in-the-wild datasets and the challenges of 3D annotation. We introduce LabelAny3D, an analysis-by-synthesis framework that reconstructs holistic 3D scenes from 2D images to efficiently produce high-quality 3D bounding box annotations. Built on this pipeline, we present COCO3D, a new benchmark for open-vocabulary monocular 3D detection, derived from the MS-COCO dataset and covering a wide range of object categories absent from existing 3D datasets. Experiments show that annotations generated by LabelAny3D improve monocular 3D detection performance across multiple benchmarks, outperforming prior auto-labeling approaches in quality. These results demonstrate the promise of foundation-model-driven annotation for scaling up 3D recognition in realistic, open-world settings.
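The abstract describes producing 3D bounding box annotations from monocular images. As a generic illustration of what such an annotation contains (this is not LabelAny3D's actual code or format), a 3D box is commonly parameterized by a center, dimensions, and a yaw rotation about the vertical axis, from which the eight corners used for visualization and 3D IoU can be recovered:

```python
import numpy as np

def box3d_corners(center, dims, yaw):
    """Return the 8 corners of a yaw-rotated 3D bounding box.

    center: (x, y, z) box center in camera coordinates
    dims:   (w, h, l) width, height, length
    yaw:    rotation about the vertical (y) axis, in radians

    Note: parameter names and axis conventions here are illustrative;
    datasets differ in their exact box parameterization.
    """
    w, h, l = dims
    # Axis-aligned corner offsets, centered at the origin.
    x = np.array([ l,  l, -l, -l,  l,  l, -l, -l]) / 2.0
    y = np.array([ h,  h,  h,  h, -h, -h, -h, -h]) / 2.0
    z = np.array([ w, -w, -w,  w,  w, -w, -w,  w]) / 2.0
    # Rotate each corner about the y axis, then translate to the center.
    c, s = np.cos(yaw), np.sin(yaw)
    rx = c * x + s * z
    rz = -s * x + c * z
    return np.stack([rx, y, rz], axis=1) + np.asarray(center)  # (8, 3)
```

A detector's predicted boxes in this form can then be scored against annotations via 3D IoU, which underlies metrics such as the AP3D numbers reported below.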

Jin Yao, Radowan Mahmud Redoy, Sebastian Elbaum, Matthew B. Dwyer, Zezhou Cheng • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Monocular 3D Detection | COCO3D | AP3D | 0.1092 | 5
Monocular 3D Detection | Omni3D (novel category split) | AP3D | 16.98 | 5
Monocular 3D Detection | Omni3D (base category split) | AP3D | 24.77 | 5
