Open Access


HGG-CNN: The Generation of the Optimal Robotic Grasp Pose Based on Vision

Shiyin Qiu1,*, David Lodder2, Feifan Du2

1 Shanghai Maritime University, Shanghai, China
2 HZ University of Applied Sciences, Vlissingen, Zeeland, Netherlands

* Corresponding Author: Shiyin Qiu. Email:

Intelligent Automation & Soft Computing 2020, 26(6), 1517-1529.


Robotic grasping is an important problem in the field of robot control. To generate the optimal grasping pose for a robotic arm, a new convolutional neural network, the Hybrid Generative Grasping Convolutional Neural Network (HGG-CNN), is proposed. It builds on the Generative Grasping Convolutional Neural Network (GG-CNN) by combining three small network modules: an Inception Block, a Dense Block, and a Squeeze-and-Excitation layer (SELayer). This hybrid structure improves the accuracy of the grasping poses predicted by GG-CNN and thereby the grasp success rate. It also addresses a weakness of the original GG-CNN, whose recognition rate falls below 70% on complex, irregular man-made objects. In experiments, HGG-CNN raises the average grasping-pose accuracy of the original GG-CNN from 83.83% to 92.48%; for irregular man-made objects with complex shapes, such as spoons, the grasping-pose recognition rate rises from 21.38% to 55.33%.
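The abstract does not give implementation details, but the SELayer it names is conventionally the squeeze-and-excitation mechanism: global average pooling produces per-channel statistics, a small bottleneck network turns them into gates in (0, 1), and each channel of the feature map is rescaled by its gate. A minimal NumPy sketch under that assumption (all weights, shapes, and the reduction ratio are illustrative, not taken from the paper):

```python
import numpy as np

def se_layer(feature_map, w1, b1, w2, b2):
    """Squeeze-and-Excitation on a (C, H, W) feature map (illustrative sketch)."""
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid, yielding gates in (0, 1)
    s = np.maximum(0.0, w1 @ z + b1)            # reduce to C // r channels
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))    # restore to C channels
    # Scale: reweight each channel by its gate
    return feature_map * s[:, None, None]

# Toy example: C = 4 channels, reduction ratio r = 2
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1, b1 = rng.standard_normal((2, 4)), np.zeros(2)
w2, b2 = rng.standard_normal((4, 2)), np.zeros(4)
y = se_layer(x, w1, b1, w2, b2)
```

Each output channel is the corresponding input channel multiplied by a single learned gate, which lets the network emphasize informative channels and suppress uninformative ones.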


Cite This Article

S. Qiu, D. Lodder and F. Du, "HGG-CNN: The generation of the optimal robotic grasp pose based on vision," Intelligent Automation & Soft Computing, vol. 26, no. 6, pp. 1517–1529, 2020.


This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.