Open Access

ARTICLE


Enhancing Human Action Recognition with Adaptive Hybrid Deep Attentive Networks and Archerfish Optimization

Ahmad Yahiya Ahmad Bani Ahmad1, Jafar Alzubi2, Sophers James3, Vincent Omollo Nyangaresi4,5,*, Chanthirasekaran Kutralakani6, Anguraju Krishnan7

1 Department of Accounting and Finance, Faculty of Business, Middle East University, Amman, 11831, Jordan
2 Faculty of Engineering, Al-Balqa Applied University, Salt, 19117, Jordan
3 Department of Mathematics, Kongunadu College of Engineering and Technology (Autonomous), Tholurpatti, Trichy, 621215, India
4 Department of Computer Science and Software Engineering, Jaramogi Oginga Odinga University of Science and Technology, Bondo, 210-40601, Kenya
5 Department of Electronics and Communication Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, 602105, India
6 Department of Electronics and Communication Engineering, Saveetha Engineering College (Autonomous), Chennai, 602105, India
7 Department of Computer Science and Engineering, Kongunadu College of Engineering and Technology (Autonomous), Tholurpatti, Trichy, 621215, India

* Corresponding Author: Vincent Omollo Nyangaresi.

Computers, Materials & Continua 2024, 80(3), 4791-4812. https://doi.org/10.32604/cmc.2024.052771

Abstract

In recent years, wearable-device-based Human Activity Recognition (HAR) models have received significant attention. Earlier HAR models rely on hand-crafted features to recognize human activities, so they capture only basic characteristics of the data. The images captured by wearable sensors contain richer features that deep learning algorithms can analyze to improve the detection and recognition of human actions. However, poor lighting and limited sensor capabilities can degrade data quality, making the recognition of human actions a challenging task, and unimodal HAR approaches are not well suited to real-time environments. Therefore, an updated HAR model is developed using multiple types of data and an advanced deep learning approach. First, the required signals and sensor data are collected from standard databases, and wave features are extracted from these signals. The extracted wave features and sensor data are then given as input for recognizing the human activity. An Adaptive Hybrid Deep Attentive Network (AHDAN) is developed by combining a 1D Convolutional Neural Network (1DCNN) with a Gated Recurrent Unit (GRU) for the human activity recognition process. Additionally, the Enhanced Archerfish Hunting Optimizer (EAHO) is proposed to fine-tune the network parameters and enhance the recognition process. An experimental evaluation against various deep learning networks and heuristic algorithms confirms the effectiveness of the proposed HAR model. The EAHO-based HAR model outperforms traditional deep learning networks, achieving 95.36% accuracy, 95.25% recall, 95.48% specificity, and 95.47% precision. The results show that the developed model recognizes human actions effectively while requiring less time, and the optimization approach reduces computational complexity and mitigates overfitting.
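The abstract describes the hybrid architecture only at a high level. Below is a minimal, illustrative PyTorch sketch of a 1D-CNN front end feeding a GRU with a simple additive attention pooling layer; the layer sizes, kernel widths, attention mechanism, and number of classes are assumptions chosen for illustration and do not reproduce the authors' exact AHDAN configuration or the EAHO tuning step.

```python
# Illustrative sketch (not the authors' exact AHDAN): 1D convolutions extract local
# temporal patterns from raw sensor channels, a GRU models longer-range dependencies,
# and additive attention pools the GRU outputs into a fixed-size vector for classification.
import torch
import torch.nn as nn

class HybridCNNGRU(nn.Module):
    def __init__(self, in_channels=6, num_classes=6, conv_channels=64, gru_hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(conv_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(conv_channels, gru_hidden, batch_first=True)
        self.attn = nn.Linear(gru_hidden, 1)        # scores each time step
        self.classifier = nn.Linear(gru_hidden, num_classes)

    def forward(self, x):                           # x: (batch, channels, time)
        feats = self.cnn(x)                         # (batch, conv_channels, time/4)
        feats = feats.permute(0, 2, 1)              # (batch, time/4, conv_channels)
        out, _ = self.gru(feats)                    # (batch, time/4, gru_hidden)
        weights = torch.softmax(self.attn(out), dim=1)   # attention over time steps
        context = (weights * out).sum(dim=1)        # (batch, gru_hidden)
        return self.classifier(context)             # class logits

# Example: a batch of 8 windows, 6 sensor channels, 128 samples per window (assumed shapes)
model = HybridCNNGRU(in_channels=6, num_classes=6)
logits = model(torch.randn(8, 6, 128))
print(logits.shape)  # torch.Size([8, 6])
```

In such a design, a metaheuristic like EAHO would typically search over hyperparameters such as the convolution channel count, GRU hidden size, and learning rate rather than the gradient-trained weights themselves; the paper should be consulted for the exact parameters it tunes.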

Keywords


Cite This Article

APA Style
Ahmad, A.Y.A.B., Alzubi, J., James, S., Nyangaresi, V.O., Kutralakani, C. et al. (2024). Enhancing human action recognition with adaptive hybrid deep attentive networks and archerfish optimization. Computers, Materials & Continua, 80(3), 4791-4812. https://doi.org/10.32604/cmc.2024.052771
Vancouver Style
Ahmad AYAB, Alzubi J, James S, Nyangaresi VO, Kutralakani C, Krishnan A. Enhancing human action recognition with adaptive hybrid deep attentive networks and archerfish optimization. Comput Mater Contin. 2024;80(3):4791-4812. https://doi.org/10.32604/cmc.2024.052771
IEEE Style
A.Y.A.B. Ahmad, J. Alzubi, S. James, V.O. Nyangaresi, C. Kutralakani, and A. Krishnan, "Enhancing Human Action Recognition with Adaptive Hybrid Deep Attentive Networks and Archerfish Optimization," Comput. Mater. Contin., vol. 80, no. 3, pp. 4791-4812, 2024. https://doi.org/10.32604/cmc.2024.052771



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.