Open Access
ARTICLE
Active Learning-Enhanced Deep Ensemble Framework for Human Activity Recognition Using Spatio-Textural Features
1 Department of Computer Science and Engineering, Annamalai University, Annamalainagar, 608002, Tamil Nadu, India
2 Department of Computer Science and Engineering, Gudlavalleru Engineering College, Gudlavalleru, 521356, Andhra Pradesh, India
* Corresponding Author: Lakshmi Alekhya Jandhyam. Email:
(This article belongs to the Special Issue: Machine Learning and Deep Learning-Based Pattern Recognition)
Computer Modeling in Engineering & Sciences 2025, 144(3), 3679-3714. https://doi.org/10.32604/cmes.2025.068941
Received 10 June 2025; Accepted 27 August 2025; Issue published 30 September 2025
Abstract
Human Activity Recognition (HAR) has become increasingly critical in civic surveillance, medical care monitoring, and institutional protection. Current deep learning-based approaches often suffer from excessive computational complexity, limited generalizability under varying conditions, and compromised real-time performance. To counter these limitations, this paper introduces an Active Learning-aided Heuristic Deep Spatio-Textural Ensemble Learning (ALH-DSEL) framework. The model first identifies keyframes from surveillance videos with a Multi-Constraint Active Learning (MCAL) approach, using features extracted by DenseNet121. The keyframes are then segmented with a Fuzzy C-Means clustering algorithm optimized by the Firefly algorithm to identify regions of interest. A deep ensemble feature extractor, comprising DenseNet121, EfficientNet-B7, MobileNet, and GLCM, extracts diverse spatial and textural features. The fused features are refined through PCA and Min-Max normalization and classified by a maximum-voting ensemble of RF, AdaBoost, and XGBoost. Experimental results show that ALH-DSEL achieves higher accuracy, precision, recall, and F1-score, validating its suitability for real-time HAR in surveillance scenarios.
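The classification stage described in the abstract (PCA and Min-Max normalization followed by a maximum-voting ensemble) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses synthetic feature vectors in place of the fused spatio-textural features, assumes Min-Max scaling precedes PCA, and substitutes scikit-learn's GradientBoostingClassifier for XGBoost so the sketch stays self-contained.

```python
# Hedged sketch of the ALH-DSEL classification stage: fused features ->
# Min-Max normalization -> PCA -> hard (majority) voting over RF, AdaBoost,
# and a gradient-boosting stand-in for XGBoost.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the fused spatio-textural feature vectors
# (4 hypothetical activity classes, 64-dimensional features).
X, y = make_classification(n_samples=400, n_features=64, n_informative=20,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(
    MinMaxScaler(),                  # Min-Max normalization
    PCA(n_components=20),            # dimensionality reduction
    VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("ada", AdaBoostClassifier(random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),  # XGBoost stand-in
        ],
        voting="hard",               # maximum (majority) voting
    ),
)
clf.fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.3f}")
```

Swapping the GradientBoostingClassifier for `xgboost.XGBClassifier` would recover the ensemble composition named in the paper; the voting logic is unchanged.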
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

