Open Access

ARTICLE


Research on Facial Expression Capture Based on Two-Stage Neural Network

Zhenzhou Wang1, Shao Cui1, Xiang Wang1,*, JiaFeng Tian2

1 School of Information Science and Engineering, Hebei University of Science and Technology, Shijiazhuang, 050000, China
2 School of Engineering, Newcastle University, Newcastle Upon Tyne, NE98, United Kingdom

* Corresponding Author: Xiang Wang.

Computers, Materials & Continua 2022, 72(3), 4709-4725. https://doi.org/10.32604/cmc.2022.027767

Abstract

To generate realistic three-dimensional animation of a virtual character, capturing real facial expressions is the primary task. Owing to diverse facial expressions and complex backgrounds, facial landmarks recognized by existing strategies suffer from deviation and low accuracy. Therefore, this paper proposes a facial expression capture method based on a two-stage neural network that takes advantage of an improved multi-task cascaded convolutional network (MTCNN) and a high-resolution network. Firstly, the convolution operations of the traditional MTCNN are improved: face information in the input image is quickly filtered by feature fusion in the first stage, and Octave Convolution replaces the original convolutions in the second stage to enhance the feature extraction ability of the network, which further rejects a large number of false candidates. The model thus outputs more accurate facial candidate windows for better landmark recognition and locates the faces. The images cropped after face detection are then fed into a high-resolution network, where multi-scale feature fusion is realized by parallel connection of multi-resolution streams and rich high-resolution heatmaps of the facial landmarks are obtained. Finally, the changes of the recognized facial landmarks are tracked in real time; the expression parameters are extracted and transmitted to the Unity3D engine to drive the virtual character's face, realizing synchronized facial expression animation. Extensive experimental results on the WFLW database demonstrate the superiority of the proposed method in terms of accuracy and robustness, especially for diverse expressions and complex backgrounds. The method can accurately capture facial expressions and generate three-dimensional animation effects, making online entertainment and social interaction more immersive in shared virtual spaces.
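As the abstract notes, the high-resolution network regresses one heatmap per facial landmark, and coordinates are then recovered from the heatmap peaks. The following minimal NumPy sketch illustrates that decoding step only; the function name and toy inputs are illustrative and not taken from the paper:

```python
import numpy as np

def decode_landmarks(heatmaps):
    """Decode (x, y) landmark coordinates from per-landmark heatmaps.

    heatmaps: array of shape (num_landmarks, H, W), where each channel
    peaks at its landmark's location, as in heatmap-regression networks.
    Returns an integer array of shape (num_landmarks, 2) holding (x, y).
    """
    n, h, w = heatmaps.shape
    # Flatten each channel and find the index of its maximum response.
    flat_idx = heatmaps.reshape(n, -1).argmax(axis=1)
    # Convert flat indices back to row (y) and column (x) coordinates.
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1)

# Toy example: two 8x8 heatmaps with a single peak each.
hm = np.zeros((2, 8, 8))
hm[0, 3, 5] = 1.0   # landmark 0 at (x=5, y=3)
hm[1, 6, 2] = 1.0   # landmark 1 at (x=2, y=6)
print(decode_landmarks(hm))  # peaks recovered at (5, 3) and (2, 6)
```

In practice, heatmap decoders often refine the integer argmax with sub-pixel interpolation around the peak; the hard argmax above is the simplest variant.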

Keywords


Cite This Article

APA Style
Wang, Z., Cui, S., Wang, X., & Tian, J. (2022). Research on facial expression capture based on two-stage neural network. Computers, Materials & Continua, 72(3), 4709-4725. https://doi.org/10.32604/cmc.2022.027767
Vancouver Style
Wang Z, Cui S, Wang X, Tian J. Research on facial expression capture based on two-stage neural network. Comput Mater Contin. 2022;72(3):4709-4725. https://doi.org/10.32604/cmc.2022.027767
IEEE Style
Z. Wang, S. Cui, X. Wang, and J. Tian, "Research on Facial Expression Capture Based on Two-Stage Neural Network," Comput. Mater. Contin., vol. 72, no. 3, pp. 4709-4725, 2022. https://doi.org/10.32604/cmc.2022.027767



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.