Vol.27, No.2, 2021, pp.425-439, doi:10.32604/iasc.2021.013795
OPEN ACCESS
ARTICLE
Generation of Synthetic Images of Randomly Stacked Object Scenes for Network Training Applications
Yajun Zhang1,*, Jianjun Yi1, Jiahao Zhang1, Yuanhao Chen1, Liang He2
1 East China University of Science and Technology, Shanghai, 200237, China
2 Shanghai Aerospace Control Technology Institute, Shanghai, 201109, China
* Corresponding Author: Yajun Zhang. Email:
Received 30 August 2020; Accepted 15 November 2020; Issue published 18 January 2021
Abstract
Image recognition algorithms based on deep learning have developed rapidly in recent years owing to their ability to capture recognition features automatically from image datasets, steadily improving the accuracy and efficiency of the image recognition process. However, training deep learning networks is time-consuming and expensive because large training datasets are generally required, and extensive manpower is needed to annotate each image in the training dataset to support the supervised learning process. This task is particularly arduous when the image scenes involve randomly stacked objects. The present work addresses this issue by developing a synthetic training dataset generation method based on OpenGL and the Bullet physics engine, which automatically generates annotated synthetic datasets by simulating the free fall of a collection of objects under gravity. A rigorous statistical comparison of a real image dataset of stacked scenes with a synthetic image dataset generated by the proposed approach demonstrates that the two datasets exhibit no significant differences. Moreover, the object detection performance obtained by three popular network architectures trained on the synthetic dataset generated by the proposed approach is much better than that obtained with a synthetic dataset generated by a conventional cut-and-paste approach, and is also competitive with the results of training on a dataset composed of real images.
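The core idea of the abstract — drop a collection of objects, let a physics engine settle them under gravity, then record the resulting poses as annotations — can be illustrated with a minimal sketch. The paper uses the full Bullet engine with OpenGL rendering; the toy 1-D Euler integrator below is only an illustration of the drop-and-settle step, with all parameter values (time step, restitution, settle threshold) chosen arbitrarily for the example.

```python
import random

def simulate_freefall(heights, dt=0.01, steps=2000, g=9.81, restitution=0.3):
    """Crude 1-D freefall-and-bounce simulation of objects dropped
    onto a ground plane at y = 0. Returns the settled heights, which
    a dataset generator would record as part of the scene annotation."""
    ys = list(heights)          # current height of each object
    vs = [0.0] * len(heights)   # current vertical velocity of each object
    for _ in range(steps):
        for i in range(len(ys)):
            vs[i] -= g * dt         # gravity accelerates the object downward
            ys[i] += vs[i] * dt     # explicit Euler position update
            if ys[i] < 0.0:         # object hit the ground plane
                ys[i] = 0.0
                vs[i] = -vs[i] * restitution  # bounce with energy loss
                if abs(vs[i]) < 0.05:         # below threshold: object settles
                    vs[i] = 0.0
    return ys

# Drop five objects from random initial heights and let them settle.
random.seed(0)
final = simulate_freefall([random.uniform(0.5, 2.0) for _ in range(5)])
```

In the actual pipeline a 3-D rigid-body engine such as Bullet handles object-object collisions and full 6-DOF poses, but the structure is the same: step the simulation until the scene is at rest, then render and annotate it.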
Keywords
Synthetic dataset; stacked object scenes; OpenGL; Bullet physics engine; image recognition; parts position
Cite This Article
Y. Zhang, J. Yi, J. Zhang, Y. Chen and L. He, "Generation of synthetic images of randomly stacked object scenes for network training applications," Intelligent Automation & Soft Computing, vol. 27, no.2, pp. 425–439, 2021.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.