Open Access

ARTICLE

Dense Spatial-Temporal Graph Convolutional Network Based on Lightweight OpenPose for Detecting Falls

Xiaorui Zhang1,2,3,*, Qijian Xie1, Wei Sun3,4, Yongjun Ren1,2,3, Mithun Mukherjee5

1 School of Computer Science, Nanjing University of Information Science & Technology, Nanjing, 210044, China
2 Wuxi Research Institute, Nanjing University of Information Science & Technology, Wuxi, 214100, China
3 Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science & Technology, Nanjing, 210044, China
4 School of Automation, Nanjing University of Information Science & Technology, Nanjing, 210044, China
5 School of Artificial Intelligence, Nanjing University of Information Science & Technology, Nanjing, 210044, China

* Corresponding Author: Xiaorui Zhang

Computers, Materials & Continua 2023, 77(1), 47-61. https://doi.org/10.32604/cmc.2023.042561

Abstract

Fall behavior is closely related to high mortality in the elderly, so fall detection has become an important and urgent research area. However, existing fall detection methods are difficult to apply in daily life because of their heavy computational cost and poor detection accuracy. To solve these problems, this paper proposes a dense spatial-temporal graph convolutional network based on lightweight OpenPose. Lightweight OpenPose uses MobileNet as the feature extraction network, and its prediction layer adopts a bottleneck-asymmetric structure, thereby reducing the computational cost of the network. The bottleneck-asymmetric structure compresses the number of input channels of the feature maps with a 1 × 1 convolution and replaces the 7 × 7 convolution with a parallel asymmetric structure of 1 × 7, 7 × 1, and 7 × 7 convolutions. The spatial-temporal graph convolutional network divides its multi-layer convolution into dense blocks, and the convolutional layers within each dense block are densely connected, which improves feature transitivity and enhances the network’s ability to extract features, thus improving detection accuracy. Two representative datasets, the Multiple Cameras Fall dataset (MCF) and the Nanyang Technological University Red Green Blue + Depth Action Recognition dataset (NTU RGB + D), are selected for our experiments; NTU RGB + D provides two evaluation benchmarks. The results show that the proposed model is superior to current fall detection models: its accuracy on the MCF dataset is 96.3%, and its accuracies on the two evaluation benchmarks of the NTU RGB + D dataset are 85.6% and 93.5%, respectively.
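The two architectural ideas described above can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the class names, channel sizes, layer counts, and the use of plain Conv2d layers as stand-ins for the spatial-temporal graph convolutions inside the dense block are all assumptions made for illustration only.

```python
# Minimal sketch of the two ideas in the abstract, assuming PyTorch.
# All names and sizes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class BottleneckAsymmetricBlock(nn.Module):
    """Compress channels with a 1x1 convolution, then apply 1x7, 7x1, and 7x7
    convolutions in parallel (replacing a single 7x7 convolution) and sum them."""

    def __init__(self, in_channels: int, bottleneck_channels: int, out_channels: int):
        super().__init__()
        # 1x1 bottleneck: reduces the number of input channels before the
        # expensive large-kernel convolutions.
        self.reduce = nn.Conv2d(in_channels, bottleneck_channels, kernel_size=1)
        # Parallel asymmetric branches with padding chosen to keep spatial size.
        self.branch_1x7 = nn.Conv2d(bottleneck_channels, out_channels,
                                    kernel_size=(1, 7), padding=(0, 3))
        self.branch_7x1 = nn.Conv2d(bottleneck_channels, out_channels,
                                    kernel_size=(7, 1), padding=(3, 0))
        self.branch_7x7 = nn.Conv2d(bottleneck_channels, out_channels,
                                    kernel_size=7, padding=3)
        self.activation = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.activation(self.reduce(x))
        # Sum the parallel branches so the output keeps one channel width.
        return self.activation(self.branch_1x7(x)
                               + self.branch_7x1(x)
                               + self.branch_7x7(x))


class DenseBlock(nn.Module):
    """Dense connectivity: each layer receives the concatenation of the block
    input and all preceding layers' outputs. Plain 3x3 Conv2d layers stand in
    for the spatial-temporal graph convolutions to show only the wiring."""

    def __init__(self, channels: int, growth: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1)
            for i in range(num_layers)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            out = torch.relu(layer(torch.cat(features, dim=1)))
            features.append(out)  # every output feeds all later layers
        return torch.cat(features, dim=1)


if __name__ == "__main__":
    block = BottleneckAsymmetricBlock(in_channels=128, bottleneck_channels=64,
                                      out_channels=128)
    feature_map = torch.randn(1, 128, 46, 46)  # dummy backbone feature map
    print(block(feature_map).shape)            # torch.Size([1, 128, 46, 46])

    dense = DenseBlock(channels=128, growth=32, num_layers=4)
    print(dense(feature_map).shape)            # torch.Size([1, 256, 46, 46])
```

The dense wiring is what the abstract credits for improved feature transitivity: because each layer sees every earlier layer's output, features propagate through the block without being diluted by successive transformations.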

Keywords


Cite This Article

APA Style
Zhang, X., Xie, Q., Sun, W., Ren, Y., & Mukherjee, M. (2023). Dense spatial-temporal graph convolutional network based on lightweight OpenPose for detecting falls. Computers, Materials & Continua, 77(1), 47-61. https://doi.org/10.32604/cmc.2023.042561
Vancouver Style
Zhang X, Xie Q, Sun W, Ren Y, Mukherjee M. Dense spatial-temporal graph convolutional network based on lightweight OpenPose for detecting falls. Comput Mater Contin. 2023;77(1):47-61. https://doi.org/10.32604/cmc.2023.042561
IEEE Style
X. Zhang, Q. Xie, W. Sun, Y. Ren, and M. Mukherjee, "Dense Spatial-Temporal Graph Convolutional Network Based on Lightweight OpenPose for Detecting Falls," Comput. Mater. Contin., vol. 77, no. 1, pp. 47-61, 2023. https://doi.org/10.32604/cmc.2023.042561



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.