
Fully Convolutional Grasp Detection Network with Oriented Anchor Box

About

In this paper, we present a real-time approach to predicting multiple grasping poses for a parallel-plate robotic gripper from RGB images. We propose a model with an oriented anchor box mechanism and a new matching strategy used during training. Our work employs an end-to-end fully convolutional neural network consisting of two parts: a feature extractor and a multi-grasp predictor. The feature extractor is a deep convolutional neural network. The multi-grasp predictor regresses grasp rectangles from predefined oriented rectangles, called oriented anchor boxes, and classifies each rectangle as graspable or ungraspable. On the standard Cornell Grasp Dataset, our model achieves accuracies of 97.74% on the image-wise split and 96.61% on the object-wise split, outperforming the latest state-of-the-art approach by 1.74% and 0.51% respectively.
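The abstract describes a detector that regresses grasp rectangles from predefined oriented anchor boxes and classifies each as graspable or ungraspable. The decoding step can be sketched as below; this is a hypothetical illustration (the function name, offset parameterization, and score threshold are assumptions, following the anchor-box decoding common in object detectors, not the authors' released code).

```python
import numpy as np

def decode_grasps(offsets, scores, anchors, score_thresh=0.5):
    """Convert predicted offsets relative to oriented anchor boxes into
    grasp rectangles (x, y, w, h, theta), keeping only those whose
    graspable score exceeds the threshold.

    Hypothetical sketch: each anchor is (ax, ay, aw, ah, atheta) and each
    offset is (tx, ty, tw, th, ttheta), as in standard anchor-based detectors.
    """
    grasps = []
    for (tx, ty, tw, th, ttheta), s, (ax, ay, aw, ah, atheta) in zip(
            offsets, scores, anchors):
        if s < score_thresh:
            continue  # classified ungraspable
        x = ax + tx * aw          # shift center by offset scaled to anchor size
        y = ay + ty * ah
        w = aw * np.exp(tw)       # scale width/height multiplicatively
        h = ah * np.exp(th)
        theta = atheta + ttheta   # rotate relative to the anchor orientation
        grasps.append((x, y, float(w), float(h), theta))
    return grasps
```

With zero offsets the decoded grasp coincides with its anchor, which is why a matching strategy that pairs each ground-truth grasp with a nearby, similarly oriented anchor makes the regression targets small and easy to learn.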

Xinwen Zhou, Xuguang Lan, Hanbo Zhang, Zhiqiang Tian, Yang Zhang, Nanning Zheng · 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Grasp Detection | Cornell Dataset (object-wise) | Accuracy | 96.6 | 39 |
| Grasp Detection | Cornell Dataset (image-wise) | Accuracy | 97.7 | 25 |
| Grasp Detection | Cornell (image-wise) | Accuracy | 97.7 | 24 |
| Grasp Detection | Jacquard Dataset | Accuracy | 92.8 | 16 |
