
Open Access

ARTICLE

MFCCT: A Robust Spectral-Temporal Fusion Method with DeepConvLSTM for Human Activity Recognition

Rashid Jahangir1,*, Nazik Alturki2, Muhammad Asif Nauman3, Faiqa Hanif1
1 Department of Computer Science, COMSATS University Islamabad, Vehari Campus, Vehari, 61100, Pakistan
2 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
3 Riphah School of Computing & Innovation, Riphah International University, Lahore, 54000, Pakistan
* Corresponding Author: Rashid Jahangir

Computers, Materials & Continua. https://doi.org/10.32604/cmc.2025.071574

Received 07 August 2025; Accepted 13 October 2025; Published online 21 November 2025

Abstract

Human activity recognition (HAR) is the task of predicting human activities from sensor signals using machine learning (ML) techniques. HAR systems are applied in domains such as medicine, surveillance, behavioral monitoring, and posture analysis. Extracting suitable information from sensor data is an essential step toward recognizing activities accurately. Several HAR studies have utilized Mel frequency cepstral coefficients (MFCCs) because of their effectiveness in capturing the periodic patterns of sensor signals. However, existing MFCC-based approaches often fail to capture sufficient temporal variability, which limits their ability to distinguish between complex or imbalanced activity classes robustly. To address this gap, this study proposes a feature fusion strategy that merges time-based and MFCC features (MFCCT) to enhance activity representation. The fused features were fed to a convolutional neural network (CNN) integrated with long short-term memory (LSTM), termed DeepConvLSTM, to construct the HAR model. The MFCCT features with DeepConvLSTM outperformed MFCCs and time-based features alone on PAMAP2, UCI-HAR, and WISDM, achieving accuracies of 97%, 98%, and 97%, respectively. In addition, DeepConvLSTM outperformed the deep learning (DL) algorithms recently employed in HAR. These results confirm that the proposed hybrid features are not only practical but also generalizable, making them applicable across diverse HAR datasets for accurate activity classification.
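The article itself does not include code; the following is a minimal sketch, assuming a librosa/tf.keras stack, of how the described MFCCT fusion (MFCCs merged with time-based statistics per sensor window) and a DeepConvLSTM-style network (Conv1D layers followed by an LSTM) might be wired together. The sampling rate, window length, choice of time-domain statistics, and layer sizes are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch (not the authors' code): fuse MFCC and time-based features
# from windowed inertial-sensor signals, then classify with a Conv1D + LSTM
# ("DeepConvLSTM"-style) network. All hyperparameters below are assumptions.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

def mfcct_features(window, sr=100, n_mfcc=13):
    """Return a fused MFCC + time-domain feature vector for one 1-D window."""
    # Spectral part: MFCCs averaged over the frames of the window.
    mfcc = librosa.feature.mfcc(y=window, sr=sr, n_mfcc=n_mfcc,
                                n_fft=128, hop_length=64, n_mels=20)
    mfcc_mean = mfcc.mean(axis=1)                               # (n_mfcc,)
    # Time-based part: simple per-window statistics (assumed set).
    time_feats = np.array([window.mean(), window.std(),
                           window.min(), window.max(),
                           np.sqrt(np.mean(window ** 2))])      # RMS
    return np.concatenate([mfcc_mean, time_feats])              # fused MFCCT

def build_deepconvlstm(n_timesteps, n_features, n_classes):
    """Conv1D feature extractor followed by an LSTM, per the DeepConvLSTM idea."""
    model = models.Sequential([
        layers.Input(shape=(n_timesteps, n_features)),
        layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
        layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
        layers.LSTM(128),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Toy data: 600 random windows of 256 samples at an assumed 100 Hz rate.
    rng = np.random.default_rng(0)
    windows = rng.standard_normal((600, 256))
    feats = np.stack([mfcct_features(w) for w in windows])      # (600, 18)
    # Group consecutive windows into sequences of 10 for the recurrent part.
    seq_len = 10
    X = feats[: (len(feats) // seq_len) * seq_len]
    X = X.reshape(-1, seq_len, feats.shape[1])
    y = rng.integers(0, 6, size=len(X))                         # 6 activity classes
    model = build_deepconvlstm(seq_len, feats.shape[1], n_classes=6)
    model.fit(X, y, epochs=1, batch_size=8, verbose=0)
```

In this sketch the convolutional layers learn local patterns across consecutive fused feature vectors while the LSTM models longer-range temporal dependencies, which mirrors the rationale the abstract gives for combining spectral (MFCC) and time-based information before a CNN-LSTM classifier.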

Keywords

DeepConvLSTM; human activity recognition (HAR); MFCCT; feature fusion; wearable sensors