Open Access

ARTICLE


BCCLR: A Skeleton-Based Action Recognition with Graph Convolutional Network Combining Behavior Dependence and Context Clues

Yunhe Wang1, Yuxin Xia2, Shuai Liu2,*

1 College of Information Science and Engineering, Institute of Interdisciplinary Studies, Hunan Normal University, Changsha, 410081, China
2 School of Educational Sciences, Institute of Interdisciplinary Studies, Hunan Normal University, Changsha, 410081, China

* Corresponding Author: Shuai Liu. Email: email

(This article belongs to the Special Issue: Multimodal Learning in Image Processing)

Computers, Materials & Continua 2024, 78(3), 4489-4507. https://doi.org/10.32604/cmc.2024.048813

Abstract

In recent years, skeleton-based action recognition has made great progress in computer vision. Graph convolutional networks (GCNs) are effective for this task because they model the human skeleton as a spatio-temporal graph. However, most GCNs define the graph topology by the physical relations of the human joints. This predefined graph ignores both the spatial relationships between non-adjacent joint pairs in particular actions and the behavior dependence between joint pairs, resulting in low recognition rates for actions with implicit correlations between joint pairs. In addition, existing methods ignore the trend correlation between adjacent frames within an action as well as context clues, leading to misrecognition of actions with similar poses. Therefore, this study proposes a learnable GCN based on behavior dependence that captures implicit joint correlations by constructing a dynamic learnable graph that extracts the specific behavior dependence of joint pairs; using the weight relationships between joint pairs, an adaptive model is constructed. A self-attention module is also designed to obtain the inter-frame topological relationships of joints and thus explore the context of actions. By combining the shared topology with the multi-head self-attention map, the module obtains a context-based clue topology that updates the dynamic graph convolution, achieving accurate recognition of different actions with similar poses. Detailed experiments on public datasets demonstrate that, compared with state-of-the-art methods, the proposed method achieves better results and higher-quality representations of actions under various evaluation protocols.
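To make the mechanism described in the abstract more concrete, the following is a minimal PyTorch sketch of a graph convolution whose adjacency combines a shared skeleton topology with a learnable, attention-derived context topology. It illustrates the general idea only and is not the authors' BCCLR implementation; the class name ContextAwareGraphConv, the head count, and all other hyper-parameters are hypothetical assumptions.

# Minimal PyTorch sketch (not the authors' BCCLR code): a graph convolution
# whose adjacency combines a shared, learnable topology with a context
# topology derived from multi-head self-attention over joint pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextAwareGraphConv(nn.Module):
    def __init__(self, in_channels, out_channels, num_joints, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        # Shared topology: initialised as an identity/physical graph but kept
        # learnable, so non-adjacent joint pairs can acquire weight.
        self.shared_topology = nn.Parameter(torch.eye(num_joints))
        # Projections used to build the attention-based context topology.
        self.query = nn.Conv2d(in_channels, num_heads * 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, num_heads * 8, kernel_size=1)
        # Feature transform applied after aggregation over the graph.
        self.transform = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        n, c, t, v = x.shape
        q = self.query(x).view(n, self.num_heads, -1, t, v)
        k = self.key(x).view(n, self.num_heads, -1, t, v)
        # Attention map over joint pairs, pooled over time: (n, heads, v, v)
        attn = torch.einsum('nhctv,nhctw->nhvw', q, k) / (q.shape[2] * t) ** 0.5
        attn = attn.softmax(dim=-1)
        # Combine the shared topology with the context-based topology.
        adjacency = self.shared_topology.unsqueeze(0).unsqueeze(0) + attn
        # Aggregate joint features along the combined graph, averaging heads.
        out = torch.einsum('nhvw,nctw->nctv', adjacency, x) / self.num_heads
        return F.relu(self.transform(out))


if __name__ == "__main__":
    layer = ContextAwareGraphConv(in_channels=3, out_channels=64, num_joints=25)
    clip = torch.randn(2, 3, 16, 25)   # 2 clips, 16 frames, 25 joints
    print(layer(clip).shape)           # torch.Size([2, 64, 16, 25])

In this sketch the attention map plays the role of the context-based clue topology, while the learnable shared matrix stands in for the predefined skeleton graph; the paper's full method additionally models behavior dependence between joint pairs and inter-frame relationships.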

Keywords


Cite This Article

APA Style
Wang, Y., Xia, Y., & Liu, S. (2024). BCCLR: A skeleton-based action recognition with graph convolutional network combining behavior dependence and context clues. Computers, Materials & Continua, 78(3), 4489-4507. https://doi.org/10.32604/cmc.2024.048813
Vancouver Style
Wang Y, Xia Y, Liu S. BCCLR: A skeleton-based action recognition with graph convolutional network combining behavior dependence and context clues. Comput Mater Contin. 2024;78(3):4489-4507. https://doi.org/10.32604/cmc.2024.048813
IEEE Style
Y. Wang, Y. Xia, and S. Liu, “BCCLR: A Skeleton-Based Action Recognition with Graph Convolutional Network Combining Behavior Dependence and Context Clues,” Comput. Mater. Contin., vol. 78, no. 3, pp. 4489-4507, 2024. https://doi.org/10.32604/cmc.2024.048813



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.