Open Access

ARTICLE


Optimizing Deep Neural Networks for Face Recognition to Increase Training Speed and Improve Model Accuracy

Mostafa Diba*, Hossein Khosravi

Faculty of Electrical Engineering, Shahrood University of Technology, Shahrood, Semnan, P.O. Box 3619995161, Iran

* Corresponding Author: Mostafa Diba. Email: email

(This article belongs to the Special Issue: Deep Learning, IoT, and Blockchain in Medical Data Processing)

Intelligent Automation & Soft Computing 2023, 38(3), 315-332. https://doi.org/10.32604/iasc.2023.046590

Abstract

Convolutional neural networks continually evolve to enhance accuracy in addressing various problems, leading to an increase in computational cost and model size. This paper introduces a novel approach for pruning face recognition models based on convolutional neural networks. The proposed method identifies and removes inefficient filters based on the information volume in feature maps. In each layer, some feature maps lack useful information, and there exists a correlation between certain feature maps. Filters associated with these two types of feature maps impose additional computational costs on the model, so eliminating them reduces both computational cost and model size. The approach employs a combination of correlation analysis and the summation of matrix elements within each feature map to detect and eliminate inefficient filters. The method was applied to two face recognition models utilizing the VGG16 and ResNet50V2 backbone architectures. In the proposed approach, the number of filters removed in each layer varies, and the removal process is independent of the adjacent layers. The convolutional layers of both backbone models were initialized with pre-trained weights from ImageNet. The CASIA-WebFace dataset was utilized for training, and the Labeled Faces in the Wild (LFW) dataset was employed for benchmarking. In the VGG16-based face recognition model, a 0.74% accuracy improvement was achieved while reducing the number of convolution parameters by 26.85% and decreasing floating-point operations (FLOPs) by 47.96%. For the face recognition model based on the ResNet50V2 architecture, the ArcFace method was implemented. The removal of inactive filters in this model led to a slight decrease in accuracy by 0.11%. However, it resulted in enhanced training speed, a reduction of 59.38% in convolution parameters, and a 57.29% decrease in FLOPs.
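The two selection criteria described above (near-zero feature maps and strongly correlated feature-map pairs) can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation; the function name, thresholds, and toy data are assumptions chosen for the example.

```python
import numpy as np

def find_inefficient_filters(feature_maps, sum_thresh=1e-3, corr_thresh=0.95):
    """Flag filters whose feature maps carry little useful information.

    feature_maps: array of shape (num_filters, H, W) from one layer.
    Two criteria, following the idea in the abstract:
      1. maps whose summed activations are near zero (low information), and
      2. maps highly correlated with an earlier kept map (redundant).
    Thresholds are illustrative, not values from the paper.
    """
    n = feature_maps.shape[0]
    flat = feature_maps.reshape(n, -1)

    # Criterion 1: summation of matrix elements -> low total activation
    sums = np.abs(flat).sum(axis=1)
    low_info = sums < sum_thresh * flat.shape[1]

    # Criterion 2: correlation analysis between feature maps
    redundant = np.zeros(n, dtype=bool)
    corr = np.corrcoef(flat)
    for i in range(n):
        for j in range(i):
            if not redundant[j] and abs(corr[i, j]) > corr_thresh:
                redundant[i] = True  # map i duplicates an earlier kept map
                break
    return np.where(low_info | redundant)[0]

# Toy layer with 3 feature maps: map 1 is nearly inactive,
# map 2 is an affine copy of map 0 (correlation = 1).
maps = np.stack([
    np.arange(16, dtype=float).reshape(4, 4),
    ((-1.0) ** np.arange(16)).reshape(4, 4) * 1e-9,
    2.0 * np.arange(16, dtype=float).reshape(4, 4) + 1.0,
])
print(find_inefficient_filters(maps))  # [1 2]
```

In an actual pruning pipeline, the flagged indices would be used to drop the corresponding convolution filters layer by layer, which matches the abstract's note that the number of removed filters varies per layer and is decided independently of adjacent layers.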

Cite This Article

APA Style
Diba, M., & Khosravi, H. (2023). Optimizing deep neural networks for face recognition to increase training speed and improve model accuracy. Intelligent Automation & Soft Computing, 38(3), 315-332. https://doi.org/10.32604/iasc.2023.046590
Vancouver Style
Diba M, Khosravi H. Optimizing deep neural networks for face recognition to increase training speed and improve model accuracy. Intell Automat Soft Comput. 2023;38(3):315-332. https://doi.org/10.32604/iasc.2023.046590
IEEE Style
M. Diba and H. Khosravi, "Optimizing Deep Neural Networks for Face Recognition to Increase Training Speed and Improve Model Accuracy," Intell. Automat. Soft Comput., vol. 38, no. 3, pp. 315-332, 2023. https://doi.org/10.32604/iasc.2023.046590



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.