Open Access

ARTICLE

Deep Retraining Approach for Category-Specific 3D Reconstruction Models from a Single 2D Image

Nour El Houda Kaiber1, Tahar Mekhaznia1, Akram Bennour1,*, Mohammed Al-Sarem2,3,*, Zakaria Lakhdara4, Fahad Ghaban2, Mohammad Nassef5,6

1 LAMIS Laboratory, University of Larbi Tebessi, Tebessa, 12002, Algeria
2 College of Computer Science and Engineering, Taibah University, Medina, 41477, Saudi Arabia
3 Energy, Industry, and Advanced Technologies Research Center, Taibah University, Madinah, 41477, Saudi Arabia
4 Lire Laboratory, University of Constantine 2 Abdelhamid Mehri, Constantine, 25000, Algeria
5 Department of Computer Science, Faculty of Computers and Artificial Intelligence, Cairo University, Giza, 12613, Egypt
6 Department of Computer Science and Artificial Intelligence, College of Computer Science and Engineering, University of Jeddah, Jeddah, 23890, Saudi Arabia

* Corresponding Authors: Akram Bennour; Mohammed Al-Sarem

Computers, Materials & Continua 2026, 86(3), 41 https://doi.org/10.32604/cmc.2025.070337

Abstract

The generation of high-quality 3D models from single 2D images remains challenging in terms of accuracy and completeness. Deep learning has emerged as a promising solution, offering new avenues for improvement. However, building models from scratch is computationally expensive and requires large datasets. This paper presents a transfer-learning-based approach for category-specific 3D reconstruction from a single 2D image. The core idea is to fine-tune a pre-trained model on specific object categories using new, unseen data, resulting in specialized versions of the model that are better adapted to reconstruct particular objects. The proposed approach uses a three-phase pipeline comprising image acquisition, 3D reconstruction, and refinement. After ensuring the quality of the input image, a ResNet50 model is used for object recognition, directing the image to the corresponding category-specific model to generate a voxel-based representation. The voxel-based 3D model is then refined by transforming it into a detailed triangular mesh representation using the Marching Cubes algorithm and Laplacian smoothing. An experimental study, using the Pix2Vox model and the Pascal3D dataset, was conducted to evaluate and validate the effectiveness of the proposed approach. Results demonstrate that category-specific fine-tuning of Pix2Vox significantly outperforms both the original model and a general model fine-tuned on all object categories, with substantial gains in Intersection over Union (IoU) scores. Visual assessments confirm improvements in geometric detail and surface realism. These findings indicate that combining transfer learning with category-specific fine-tuning and the refinement strategy of the proposed approach leads to higher-quality 3D model generation.
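
The refinement and evaluation steps mentioned in the abstract can be illustrated with a minimal sketch (not the authors' code): a predicted voxel occupancy grid is converted to a triangular mesh with Marching Cubes, smoothed with a Laplacian filter, and compared to ground truth via IoU. The 0.5 occupancy threshold and the smoothing iteration count below are illustrative assumptions.

```python
import numpy as np
from skimage import measure   # Marching Cubes implementation
import trimesh                # mesh container + Laplacian smoothing


def voxels_to_smooth_mesh(occupancy: np.ndarray, level: float = 0.5,
                          smoothing_iters: int = 10) -> trimesh.Trimesh:
    """Convert a (D, H, W) voxel occupancy grid to a smoothed triangular mesh."""
    # Extract an isosurface at the chosen occupancy level.
    verts, faces, _, _ = measure.marching_cubes(occupancy.astype(np.float32),
                                                level=level)
    mesh = trimesh.Trimesh(vertices=verts, faces=faces, process=True)
    # Laplacian smoothing reduces the blocky voxel artifacts (in-place).
    trimesh.smoothing.filter_laplacian(mesh, iterations=smoothing_iters)
    return mesh


def voxel_iou(pred: np.ndarray, gt: np.ndarray, threshold: float = 0.5) -> float:
    """Intersection over Union between predicted and ground-truth occupancy grids."""
    p, g = pred > threshold, gt > threshold
    union = np.logical_or(p, g).sum()
    return float(np.logical_and(p, g).sum() / union) if union else 1.0
```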

Keywords

3D reconstruction; computer vision; deep learning; transfer learning; object recognition; voxel representation; mesh refinement

Cite This Article

APA Style
Kaiber, N.E.H., Mekhaznia, T., Bennour, A., Al-Sarem, M., Lakhdara, Z. et al. (2026). Deep Retraining Approach for Category-Specific 3D Reconstruction Models from a Single 2D Image. Computers, Materials & Continua, 86(3), 41. https://doi.org/10.32604/cmc.2025.070337
Vancouver Style
Kaiber NEH, Mekhaznia T, Bennour A, Al-Sarem M, Lakhdara Z, Ghaban F, et al. Deep Retraining Approach for Category-Specific 3D Reconstruction Models from a Single 2D Image. Comput Mater Contin. 2026;86(3):41. https://doi.org/10.32604/cmc.2025.070337
IEEE Style
N. E. H. Kaiber et al., “Deep Retraining Approach for Category-Specific 3D Reconstruction Models from a Single 2D Image,” Comput. Mater. Contin., vol. 86, no. 3, pp. 41, 2026. https://doi.org/10.32604/cmc.2025.070337



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.