Open Access

ARTICLE


Feature Fusion Based Deep Transfer Learning Based Human Gait Classification Model

C. S. S. Anupama1, Rafina Zakieva2, Afanasiy Sergin3, E. Laxmi Lydia4, Seifedine Kadry5,6,7, Chomyong Kim8, Yunyoung Nam8,*

1 Department of Electronics and Instrumentation Engineering, V. R. Siddhartha Engineering College, Vijayawada, 520007, India
2 Candidate of Pedagogical Sciences, Department of Industrial Electronics and Lighting Engineering, Kazan State Power Engineering University, Kazan, 420066, Russia
3 Department of Theories and Principles of Physical Education and Life Safety, North-Eastern Federal University Named After M. K. Ammosov, Yakutsk, 677000, Russia
4 Department of Computer Science and Engineering, Vignan’s Institute of Information Technology, Visakhapatnam, 530049, India
5 Department of Applied Data Science, Noroff University College, Kristiansand, Norway
6 Artificial Intelligence Research Center (AIRC), Ajman University, Ajman, 346, United Arab Emirates
7 Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
8 Department of ICT Convergence, Soonchunhyang University, Asan, 31538, Korea

* Corresponding Author: Yunyoung Nam.

Intelligent Automation & Soft Computing 2023, 37(2), 1453-1468. https://doi.org/10.32604/iasc.2023.038321

Abstract

Gait is a biological trait that characterizes the way a person walks. Walking is the most fundamental activity underpinning daily life and physical health. Surface electromyography (sEMG) is a weak bioelectric signal that reflects, to some extent, the functional state of the human muscles and nervous system. Gait classifiers based on sEMG signals are widely used for analysing muscle diseases and as a guide for rehabilitation treatment. Several approaches for gait recognition using conventional and deep learning (DL) methods have been reported in the literature. This study designs an Enhanced Artificial Algae Algorithm with Hybrid Deep Learning based Human Gait Classification (EAAA-HDLGR) technique on sEMG signals. The EAAA-HDLGR technique extracts time domain (TD) and frequency domain (FD) features from the sEMG signals and fuses them. In addition, the EAAA-HDLGR technique exploits a hybrid deep learning (HDL) model for gait recognition. Finally, an EAAA-based hyperparameter optimizer, mainly derived from the quasi-oppositional based learning (QOBL) concept, is applied to the HDL model, which shows the novelty of the work. The classification outcomes of the EAAA-HDLGR technique are examined under diverse aspects, and the results indicate that the inclusion of EAAA improves gait recognition performance.

Keywords


1  Introduction

Human gait, the way an individual walks, is personally distinctive because of physical differences between individuals and can therefore be employed as a biometric for the authentication and identification of a person [1]. Compared with other biometrics, such as iris, face, and fingerprint, human gait recognition (HGR) has the advantages of being non-invasive, non-cooperative (it requires no cooperation or interaction with the subject), hard to disguise, and usable at long distance, making it very attractive as a means of detection and showing great potential in surveillance and security applications [2,3]. Many sensing modalities, including wearable devices, vision, and foot pressure, have been used for capturing gait data [4].

Traditional HGR techniques include data preprocessing and hand-crafted feature extraction for subsequent identification [5] and frequently suffer from various challenges and constraints imposed by the difficulty of the task, such as occlusions, viewing angle, locating the body segments, shadows, and large intra-class variations [6,7]. An emerging branch of machine learning (ML) called deep learning (DL) has become prominent in the past few years as a ground-breaking tool for handling problems in computer vision, speech, sound, and image processing, tremendously outperforming virtually every previously established baseline [8]. The new paradigm removes the need for an expert to manually extract representative features and has delivered promising HGR results, overcoming existing difficulties and opening room for further investigation [9,10]. Manually extracted features are affected by the smartphone's position while gathering the data [11]. Consequently, typical statistical hand-crafted features are computed from the raw smartphone sensor data. After hand-crafted feature extraction, a shallow ML classifier is used to identify different human physical activities; shallow ML algorithms therefore depend on hand-crafted features [12,13]. DL algorithms are more advanced than shallow ML algorithms because they automatically learn useful features from the raw sensory data without human intervention and then identify the human's physical activities [14]. Combining shallow ML algorithms with DL algorithms, and hand-crafted features with automatically learned features, has achieved better outcomes in smartphone-based HGR. It is therefore apparent that integrating manually extracted features with automatically learned features in a DL algorithm can enhance the potential of the smartphone-based HGR paradigm [15].

Khan et al. [16] developed a lightweight DL algorithm for HGR. The presented algorithm involves sequential steps of pretrained deep model selection, feature extraction, and classification. Initially, two lightweight pretrained models are considered and fine-tuned by retraining some layers while freezing some middle layers. Thereafter, the models are trained using a deep transfer learning (DTL) algorithm, and features are extracted from the average pooling and fully connected layers. Feature fusion is carried out through discriminative correlation analysis and enhanced by the moth-flame optimization (MFO) technique. Liang et al. [17] examine the effect of every layer of a parallel-architecture convolutional neural network (CNN). In particular, they gradually freeze the parameters of GaitSet from high to low layers and observe the fine-tuning performance. They find that increasing the number of frozen layers degrades performance, and that maximal efficacy is reached with a single convolution layer left unfrozen.

The authors in [18] proposed a novel architecture for HGR using DL and improved feature selection. During the augmentation phase, three flip operations were performed. During the feature extraction phase, two pre-trained models, NASNet Mobile and Inception-ResNetV2, were employed. These two models were trained and fine-tuned using a TL algorithm on the CASIA B gait dataset. The features of the selected deep models were refined through an adapted three-step whale optimization algorithm, and the better features were selected. Hashem et al. [19] proposed an accurate, advanced end-user software system capable of identifying individuals in video from their gait signature for hospital security. A TL algorithm based on a pretrained CNN was used, which extracts deep feature vectors and categorizes people directly rather than relying on a typical pipeline of hand-crafted feature engineering and binary silhouette computation.

In [20], the authors proposed a novel fully automatic technique for HGR under different view angles using DL. Four major phases are included: pre-processing of the original video frames, feature extraction with a pre-trained DenseNet-201 CNN model, reduction of the extracted feature vector based on a hybrid selection technique, and recognition using supervised learning methods. Sharif et al. [21] suggest a method that efficiently deals with the problems of varying walking styles and viewing angle shifts in real time. The following steps are included: (a) real-time video capture, (b) feature extraction using transfer learning (TL) on the ResNet101 deep model, and (c) feature selection based on the kurtosis-controlled entropy (KcE) method followed by a correlation-based feature fusion phase. The most discriminative features are then categorized using ML classifiers.

This study designs an Enhanced Artificial Algae Algorithm with Hybrid Deep Learning based Human Gait Classification (EAAA-HDLGR) technique on sEMG signals. The EAAA-HDLGR technique first extracts time domain (TD) and frequency domain (FD) features from the sEMG signals and fuses them. In addition, the EAAA-HDLGR technique exploits a hybrid deep learning (HDL) model for gait recognition. Finally, an EAAA-based hyperparameter optimizer, mainly derived from the quasi-oppositional based learning (QOBL) concept, is applied to the HDL model. The classification outcomes of the EAAA-HDLGR technique are examined under diverse aspects.

The rest of the paper is organized as follows. Section 2 provides the overall description of the proposed model. Next, Section 3 offers the experimental validation process. Finally, Section 4 concludes the work with major findings.

2  The Proposed Model

In this study, we derive a new EAAA-HDLGR technique for gait recognition using sEMG signals. First, the EAAA-HDLGR technique derives TD and FD features from the sEMG signals, which are then fused. In addition, the EAAA-HDLGR technique exploits the HDL model for gait recognition. Finally, an EAAA-based hyperparameter optimizer, mainly derived from the QOBL concept, is applied to the HDL model. Fig. 1 depicts the workflow of the EAAA-HDLGR algorithm.


Figure 1: Workflow of EAAA-HDLGR approach

2.1 Feature Extraction Process

After de-noising, the TD and FD features of each channel of the sEMG signal are extracted [22]. In this case, three representative time domain features are used: variance (VAR), zero crossing count (ZC), and mean absolute value (MAV). MAV exploits the property that sEMG signals exhibit large amplitude fluctuations in the time domain that are linearly related to the muscle activation level; a larger MAV indicates a higher muscle activation level.

$\mathrm{MAV} = \dfrac{1}{N}\sum_{k=1}^{N} |x_k|$   (1)

where $x_k\ (k = 1, 2, \ldots, N)$ denotes the sEMG time series with a window length of $N$. VAR measures the signal power of the sEMG signal and is formulated as:

$\mathrm{VAR} = \dfrac{1}{N-1}\sum_{k=1}^{N} x_k^2$   (2)

ZC counts the number of times the sEMG waveform crosses the zero point, while avoiding spurious crossings caused by low-level noise. It is expressed mathematically as follows:

$\mathrm{ZC} = \sum_{k=1}^{N-1} \mathrm{sgn}(-x_k\, x_{k+1})$   (3)

where $\mathrm{sgn}(x) = \begin{cases} 1, & x > 0 \\ 0, & \text{otherwise} \end{cases}$

Two representative frequency domain features are chosen: the mean power frequency $f_{mean}$ and the median frequency $f_{mf}$, determined as:

$f_{mean} = \dfrac{\int_{0}^{+\infty} f\, P(f)\, df}{\int_{0}^{+\infty} P(f)\, df}$   (4)

$\int_{0}^{f_{mf}} P(f)\, df = \int_{f_{mf}}^{+\infty} P(f)\, df = \dfrac{1}{2}\int_{0}^{+\infty} P(f)\, df$   (5)

where $P(f)$ denotes the power spectral density of the sEMG signal and $f$ the frequency.
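As a concrete illustration, a minimal NumPy/SciPy sketch of Eqs. (1)-(5) is given below. The sampling rate, window length, and channel count are illustrative assumptions, and Welch's method is used here as one common way to estimate the power spectral density $P(f)$; the paper does not prescribe a particular estimator.

```python
import numpy as np
from scipy.signal import welch

def td_fd_features(x, fs=1000.0):
    """Compute the TD and FD features of Eqs. (1)-(5) for one sEMG window.

    x  : 1-D NumPy array of de-noised sEMG samples from one channel
    fs : sampling rate in Hz (illustrative value; use the real recording rate)
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    mav = np.mean(np.abs(x))                         # Eq. (1): mean absolute value
    var = np.sum(x ** 2) / (N - 1)                   # Eq. (2): variance (signal power)
    zc = int(np.sum(x[:-1] * x[1:] < 0))             # Eq. (3): zero crossings (sign changes)

    # Welch power spectral density estimate for the FD features
    f, P = welch(x, fs=fs, nperseg=min(256, N))
    fmean = np.sum(f * P) / np.sum(P)                # Eq. (4): mean power frequency
    cum_power = np.cumsum(P)
    fmf = f[np.searchsorted(cum_power, cum_power[-1] / 2.0)]  # Eq. (5): median frequency
    return np.array([mav, var, zc, fmean, fmf])

# Fuse the TD and FD features of all channels into a single vector
# (the channel count and window length below are illustrative assumptions).
window = np.random.randn(8, 1024)                    # 8 channels x 1024 samples
fused = np.concatenate([td_fd_features(ch, fs=1000.0) for ch in window])
```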

2.2 Gait Classification using HDL Model

In this study, the HDL technique is employed for the gait classifier. It comprises a CNN combined with long short-term memory (LSTM) and bidirectional LSTM (Bi-LSTM) networks [23]. CNNs are increasingly popular in DL and consist of convolution (Conv) and pooling layers. The Conv layer extracts useful features from the input data; internally it contains one or more Conv kernels, and effective features are extracted by sliding the Conv kernels over the feature maps. A Max-Pooling layer is added after each Conv layer; it keeps the strongest features and discards the weaker ones to prevent over-fitting and reduce complexity. LSTM avoids the gradient vanishing and explosion problems in training recurrent neural networks (RNNs) by introducing input, output, and forget gates. These three gates effectively solve the RNN training problem. The input gate determines how much of the current input is carried forward, the forget gate determines how much information from previous time steps is preserved at the current step, and the output gate selects what is passed from the current state to future states. The following equations describe the individual cells of the LSTM:

$i_z = \sigma(W_{xi} x_z + W_{hi} h_{z-1} + b_i)$   (6)

$f_z = \sigma(W_{xf} x_z + W_{hf} h_{z-1} + b_f)$   (7)

$o_z = \sigma(W_{xo} x_z + W_{ho} h_{z-1} + b_o)$   (8)

$\tilde{C}_z = \tanh(W_{xc} x_z + W_{hc} h_{z-1} + b_c)$   (9)

$c_z = f_z \odot c_{z-1} + i_z \odot \tilde{C}_z$   (10)

$h_z = o_z \odot \tanh(c_z)$   (11)

Here, $W_{hi}$, $W_{hf}$, $W_{hc}$, and $W_{ho}$ denote the hidden-layer weights of the input gate, forget gate, memory cell, and output gate, while $W_{xi}$, $W_{xf}$, $W_{xc}$, and $W_{xo}$ denote the corresponding input weights. $i_z$, $f_z$, and $o_z$ are the input, forget, and output gates at step $z$, and $b_i$, $b_f$, $b_c$, and $b_o$ are the bias values of the input gate, forget gate, memory cell, and output gate; $\odot$ denotes element-wise multiplication. Bi-LSTM combines a forward and a backward LSTM, so it can exploit information in both directions and concatenate their predictions. LSTM models time-dependent data in one direction only; BiLSTM adds the opposite direction so that it captures patterns that LSTM misses. The LSTM-BiLSTM stack classifies the gait based on the extracted features.

•   The CNN is designed with three 1-D Conv layers, with 16, 32, and 64 Conv kernels, respectively. The Conv kernel size is set to 2, and effective features are extracted as the kernel strides over the input. A Max-Pooling layer is added after each Conv layer, with a pooling window size of 2 and a stride of one. The Max-Pooling layers reduce the feature dimensionality and avoid overfitting (see the sketch after this list).

•   By assigning weights to features through an attention mechanism, the attention block enhances the contribution of informative time series features and restrains the interference of insignificant ones, addressing the fact that the network does not otherwise judge the relative importance of different time series features.

•   The extracted features are taken as input to one Bi-LSTM layer and two LSTM layers, which produce the classification output.
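A minimal Keras sketch of this hybrid CNN, attention, and Bi-LSTM/LSTM classifier is given below. The Conv/pooling settings, the learning rate of 0.01, and the dropout of 0.5 follow the settings stated in this paper, whereas the input shape, the recurrent layer widths, and the simple softmax score attention are illustrative assumptions, since the text does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_hdl_model(window_len=100, n_features=5, n_classes=4):
    """Hybrid CNN + attention + Bi-LSTM/LSTM classifier (illustrative sketch).

    window_len, n_features, and n_classes are assumptions; set them to the
    shape of the fused TD/FD feature sequences and the number of gait phases.
    """
    inputs = layers.Input(shape=(window_len, n_features))

    # Three 1-D Conv layers with 16/32/64 kernels of size 2, each followed
    # by Max-Pooling with window 2 and stride 1, as described above.
    x = inputs
    for n_kernels in (16, 32, 64):
        x = layers.Conv1D(n_kernels, kernel_size=2, padding="same",
                          activation="relu")(x)
        x = layers.MaxPooling1D(pool_size=2, strides=1, padding="same")(x)

    # Simple attention block: learn a scalar score per time step,
    # normalise with softmax, and re-weight the feature sequence.
    scores = layers.Dense(1, activation="tanh")(x)
    weights = layers.Softmax(axis=1)(scores)
    x = layers.Lambda(lambda t: t[0] * t[1])([x, weights])

    # One Bi-LSTM layer followed by two LSTM layers.
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = layers.LSTM(64, return_sequences=True)(x)
    x = layers.LSTM(32)(x)
    x = layers.Dropout(0.5)(x)

    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```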

2.3 Parameter Optimization using EAAA Model

To improve the recognition rate, an EAAA-based hyperparameter optimizer is applied to the HDL model. The artificial algae algorithm models the behaviour of algae and maps it onto candidate solutions of the problem space [24]. Like real algae, artificial algae move towards a light source by helical swimming in order to photosynthesize, can adapt to the dominant species in the environment, and eventually multiply by mitotic division. The procedure therefore contains three important processes: helical movement, the evolutionary process, and adaptation. The term algae colony denotes a collection of algae cells that live together. The algae colony and the population are represented by the following formulas.

$\mathrm{Population} = \begin{bmatrix} \chi_1^1 & \cdots & \chi_1^D \\ \vdots & \ddots & \vdots \\ \chi_N^1 & \cdots & \chi_N^D \end{bmatrix}$   (12)

$\mathrm{Algae\ colony} = [\chi_i^1, \chi_i^2, \ldots, \chi_i^D]$   (13)

The algae colony behaves as a single cell and moves as a whole, and a cell of the colony can die under unfavourable living conditions. The colony located at the optimal point is called the best colony and contains the best algae cells. The growth kinetics of an algae colony are calculated by the Monod model shown below.

$\mu = \dfrac{\mu_{max} S}{K_s + S}$   (14)

In Eq. (14), $\mu$ signifies the growth rate, $\mu_{max}$ the maximal growth rate, $S$ the nutrient concentration, and $K_s$ the half-saturation constant of the algal colony. The $\mu_{max}$ value is set to one based on the mass conservation model: the total amount converted to biomass corresponds to the amount of substrate consumed in each time unit. The size of the $i$th algae colony at time $t+1$, obtained from the Monod equation, is given below:

$G_i^{t+1} = \mu_i^t G_i^t, \quad i = 1, 2, \ldots, N$   (15)

In Eq. (15), $G_i^t$ denotes the size of the $i$th algae colony at time $t$, and $N$ represents the number of algae colonies. The algae colony providing a better solution grows larger, as a sign that it has received a greater amount of nutrients. In each generation, the smallest algae colony dies through the evolutionary procedure, and the corresponding cell of the biggest algae colony reproduces in its place. This procedure is carried out with the following formulas:

$\mathrm{biggest}^t = \max G_i^t, \quad i = 1, 2, \ldots, N$   (16)

$\mathrm{smallest}^t = \min G_i^t, \quad i = 1, 2, \ldots, N$   (17)

$\mathrm{smallest}_m^t = \mathrm{biggest}_m^t, \quad m = 1, 2, \ldots, D$   (18)

In the above formulas, $D$ signifies the number of problem dimensions, biggest describes the largest algae colony, and smallest the smallest one. The initial hunger value is 0 for every artificial alga, and the adaptation process is driven by the change in the hunger level.

$\mathrm{starving}^t = \max A_i^t, \quad i = 1, 2, \ldots, N$   (19)

$\mathrm{starving}^{t+1} = \mathrm{starving}^t + (\mathrm{biggest}^t - \mathrm{starving}^t) \times \mathrm{rand}$   (20)

In these equations, $A_i^t$ signifies the hunger value of the $i$th algae colony at time $t$, and $\mathrm{starving}^t$ describes the algae colony with the maximal hunger value at time $t$. In Eq. (20), the adaptation parameter controls the adaptation scheme applied at time $t$; its value generally lies in the interval between 0 and 1. In AAA, the movement of the algae cells is helical, the viscous drag is modelled as a shear force proportional to the algae cell size, and gravity restricting movement is set to zero. Here $\tau(x_i)$ denotes the friction surface, taken as the surface area of the algae colony, which is assumed to be spherical, as shown in the equations below. Fig. 2 depicts the flowchart of AAA.

images

Figure 2: Flowchart of AAA

$\tau(x_i) = 2\pi r^2$   (21)

$\tau(x_i) = 2\pi \left( \sqrt[3]{\dfrac{3 G_i}{4\pi}} \right)^2$   (22)

The step size of the movement is controlled by the distance to the light source and the friction surface.

$x_{im}^{t+1} = x_{im}^t + (x_{jm}^t - x_{im}^t)(\Delta - \tau^t(x_i))\, p$   (23)

$x_{ik}^{t+1} = x_{ik}^t + (x_{jk}^t - x_{ik}^t)(\Delta - \tau^t(x_i))\cos\alpha$   (24)

$x_{il}^{t+1} = x_{il}^t + (x_{jl}^t - x_{il}^t)(\Delta - \tau^t(x_i))\sin\beta$   (25)

For the helical rotation of the algae cells, $x_{im}^t$, $x_{ik}^t$, and $x_{il}^t$ are the coordinates of the $i$th algae cell in the $x$, $y$, and $z$ dimensions at time $t$; $\alpha, \beta \in [0, 2\pi]$; $p \in [-1, 1]$; $\Delta$ is the shear force of the procedure; and $\tau^t(x_i)$ signifies the friction surface area of the $i$th algae cell. The AAA technique randomly creates the initial solutions in the search range. When these initial solutions are far from the global optimum, the convergence rate of the population slows down and the search can easily get trapped in a local optimum. Tizhoosh proposed the oppositional-based learning (OBL) technique to prevent such local optima [25].
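Before turning to the oppositional initialization, the helical-movement step of Eqs. (21)-(25) can be illustrated with the minimal NumPy sketch below; the shear-force value delta and the random choice of the neighbour colony j are assumptions that the text does not fix.

```python
import numpy as np

def helical_movement(pop, sizes, i, delta=2.0, rng=np.random.default_rng()):
    """One AAA helical-movement step for colony i (sketch of Eqs. (21)-(25)).

    pop   : (N, D) array of colony positions
    sizes : (N,) array of colony sizes G_i
    """
    N, D = pop.shape
    j = rng.integers(N)                       # neighbour colony acting as light source
    while j == i:
        j = rng.integers(N)

    # Friction surface of a spherical colony, Eqs. (21)-(22)
    r = (3.0 * sizes[i] / (4.0 * np.pi)) ** (1.0 / 3.0)
    tau = 2.0 * np.pi * r ** 2

    # Pick three distinct dimensions and move helically, Eqs. (23)-(25)
    m, k, l = rng.choice(D, size=3, replace=False)
    p = rng.uniform(-1.0, 1.0)
    alpha, beta = rng.uniform(0.0, 2.0 * np.pi, size=2)
    new = pop[i].copy()
    new[m] += (pop[j, m] - pop[i, m]) * (delta - tau) * p
    new[k] += (pop[j, k] - pop[i, k]) * (delta - tau) * np.cos(alpha)
    new[l] += (pop[j, l] - pop[i, l]) * (delta - tau) * np.sin(beta)
    return new
```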

The EAAA results from applying quasi-oppositional-based population initialization. Here, each random solution is compared with its quasi-opposite solution, and the better of the two is retained; thus, the $N$ best individuals are chosen from a combined population containing $N$ random individuals and their opposite solutions. Let $X = (X_1, X_2, \ldots, X_D)$ be a point in the $D$-dimensional space. Its opposite point $X^{OBL} = (X_1^{OBL}, X_2^{OBL}, \ldots, X_D^{OBL})$ is calculated by the following formula, and its quasi-opposite point $X^{QOBL} = (X_1^{QOBL}, X_2^{QOBL}, \ldots, X_D^{QOBL})$ is measured as:

$X_d^{OBL} = lb_d + ub_d - X_d$   (26)

$X_d^{QOBL} = \begin{cases} \dfrac{lb_d + ub_d}{2} + \mathrm{rand}(0,1) \times \left( X_d^{OBL} - \dfrac{lb_d + ub_d}{2} \right), & X_d < \dfrac{lb_d + ub_d}{2} \\ X_d^{OBL} + \mathrm{rand}(0,1) \times \left( \dfrac{lb_d + ub_d}{2} - X_d^{OBL} \right), & X_d \geq \dfrac{lb_d + ub_d}{2} \end{cases}$   (27)

In these formulas, $X_d$, $X_d^{OBL}$, and $X_d^{QOBL}$ denote the $d$th dimension of the point $X$, its opposite point $X^{OBL}$, and its quasi-opposite point $X^{QOBL}$, respectively. Also, $lb_d$ and $ub_d$ denote the lower and upper limits of the $d$th problem dimension, respectively.
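A minimal sketch of this quasi-oppositional population initialization is given below; the population size, the box bounds lb and ub, and the assumption that higher fitness is better (consistent with Eq. (28)) are illustrative choices.

```python
import numpy as np

def quasi_opposite(X, lb, ub, rng=np.random.default_rng()):
    """Quasi-opposite point of X, Eqs. (26)-(27)."""
    centre = (lb + ub) / 2.0
    x_obl = lb + ub - X                                   # Eq. (26): opposite point
    r = rng.random(X.shape)
    # Eq. (27): sample between the interval centre and the opposite point
    return np.where(X < centre,
                    centre + r * (x_obl - centre),
                    x_obl + r * (centre - x_obl))

def qobl_initialization(fitness, lb, ub, N=20, rng=np.random.default_rng()):
    """Initial population: keep the N fittest points out of N random points
    and their quasi-opposites."""
    D = len(lb)
    pop = lb + rng.random((N, D)) * (ub - lb)
    qo = np.array([quasi_opposite(x, lb, ub, rng) for x in pop])
    combined = np.vstack([pop, qo])
    scores = np.array([fitness(x) for x in combined])
    best = np.argsort(scores)[-N:]                        # keep the highest-fitness points
    return combined[best]
```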

The choice of fitness function is a vital aspect of the EAAA system. A solution encoding is used to assess the aptitude (goodness) of candidate solutions. Here, the precision value defined below is employed to design the fitness function.

$\mathrm{Fitness} = \max(P)$   (28)

$P = \dfrac{TP}{TP + FP}$   (29)

In this expression, $TP$ represents the number of true positives, and $FP$ the number of false positives.
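One way this fitness can drive the hyperparameter search is sketched below, reusing the build_hdl_model sketch given earlier; the decoding of the candidate vector (its first element taken as the learning rate), the short training budget, and the macro-averaged precision are assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import precision_score

def hdl_fitness(candidate, X_train, y_train, X_val, y_val):
    """Precision-based fitness of Eq. (29) for one EAAA candidate.

    Uses the build_hdl_model sketch above; candidate[0] is decoded as the
    learning rate (an assumed encoding of the hyperparameter vector).
    """
    lr = float(np.clip(candidate[0], 1e-4, 1e-1))
    model = build_hdl_model(window_len=X_train.shape[1],
                            n_features=X_train.shape[2],
                            n_classes=len(np.unique(y_train)))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="sparse_categorical_crossentropy")
    model.fit(X_train, y_train, epochs=5, batch_size=5, verbose=0)
    y_pred = np.argmax(model.predict(X_val, verbose=0), axis=1)
    return precision_score(y_val, y_pred, average="macro")  # Eq. (29), macro-averaged
```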

3  Results and Discussion

In this section, the gait classification results of the EAAA-HDLGR approach are investigated in detail. The proposed model is simulated using Python 3.6.5 on a PC with an i5-8600K CPU, a GeForce GTX 1050 Ti GPU (4 GB), 16 GB RAM, a 250 GB SSD, and a 1 TB HDD. The parameter settings are as follows: learning rate 0.01, dropout 0.5, batch size 5, epoch count 50, and ReLU activation.

Table 1 and Fig. 3 report the overall gait classification outcomes of the EAAA-HDLGR model and other ML models on TD features [22]. The experimental results indicate that the EAAA-HDLGR model produces effective outcomes. For instance, in the pre-stance stage, the EAAA-HDLGR model attains an accuy of 96.07%, while the support vector machine (SVM), extreme learning machine (ELM), LSTM, deep belief network (DBN), and salp swarm algorithm with DBN (SSA-DBN) models attain accuy of 95%, 96.07%, 93.49%, 93.63%, and 96.99%, respectively.


Figure 3: Gait classifier outcome of EAAA-HDLGR approach on TD features

Meanwhile, in the Terminal-Stance stage, the EAAA-HDLGR approach attains an improved accuy of 98.95%, while the SVM, ELM, LSTM, DBN, and SSA-DBN models attain lower accuy of 92.01%, 95.50%, 97.33%, 94.73%, and 95.78%, respectively. Finally, in the Terminal-Swing stage, the EAAA-HDLGR technique achieves a higher accuy of 98.03%, while the SVM, ELM, LSTM, DBN, and SSA-DBN methods attain lower accuy of 96.92%, 94.79%, 92.76%, 96.65%, and 94.84%, respectively.

An average accuy assessment of the EAAA-HDLGR model and the other models on TD features is given in Fig. 4. The outcomes show that the EAAA-HDLGR method reaches an improved average accuy of 96.88%. In contrast, the SVM, ELM, LSTM, DBN, and SSA-DBN models achieve lower accuy of 94.25%, 94.72%, 94.67%, 94%, and 95.23%, respectively.


Figure 4: Average outcome of EAAA-HDLGR approach on TD features

Table 2 and Fig. 5 illustrate the overall gait classification outcomes of the EAAA-HDLGR approach and other ML models on FD features. The experimental outcomes indicate that the EAAA-HDLGR system produces effective results. For example, in the pre-stance stage, the EAAA-HDLGR method obtains an accuy of 96.06%, while the SVM, ELM, LSTM, DBN, and SSA-DBN models attain accuy of 97.02%, 92.59%, 96.62%, 97.45%, and 94.59%, respectively. Meanwhile, in the Terminal-Stance stage, the EAAA-HDLGR system attains an increased accuy of 97.02%, while the SVM, ELM, LSTM, DBN, and SSA-DBN methods attain lower accuy of 92.02%, 94.43%, 93.70%, 94.84%, and 96.28%, respectively. Lastly, in the Terminal-Swing stage, the EAAA-HDLGR model records an accuy of 97.44%, while the SVM, ELM, LSTM, DBN, and SSA-DBN approaches attain accuy of 97.45%, 95.47%, 94.38%, 92.54%, and 95.29%, respectively.


Figure 5: Gait classifier outcome of EAAA-HDLGR approach on FD features

An average accuy analysis of the EAAA-HDLGR approach and the other models on FD features is given in Fig. 6. The outcomes show that the EAAA-HDLGR algorithm reaches an improved average accuy of 96.77%. In contrast, the SVM, ELM, LSTM, DBN, and SSA-DBN models achieve lower accuy of 95.57%, 95.22%, 94.73%, 93.96%, and 95.73%, respectively.


Figure 6: Average outcome of EAAA-HDLGR approach on FD features

Table 3 and Fig. 7 present the overall gait classification outcomes of the EAAA-HDLGR algorithm and other ML techniques on fusion features. The experimental results indicate that the EAAA-HDLGR system produces effective results. For instance, in the pre-stance stage, the EAAA-HDLGR approach attains a higher accuy of 97.66%, while the SVM, ELM, LSTM, DBN, and SSA-DBN methods obtain lower accuy of 94.53%, 94.65%, 93.41%, 97.54%, and 95.24%, respectively. Next, in the Terminal-Stance stage, the EAAA-HDLGR algorithm attains an increased accuy of 98.62%, while the SVM, ELM, LSTM, DBN, and SSA-DBN models attain lower accuy of 93.59%, 95.77%, 94.02%, 92.36%, and 95.50%, respectively. Finally, in the Terminal-Swing stage, the EAAA-HDLGR approach records an accuy of 97.55%, while the SVM, ELM, LSTM, DBN, and SSA-DBN systems obtain accuy of 92.77%, 97.00%, 94.89%, 93.75%, and 97.81%, respectively.


Figure 7: Gait classifier outcome of EAAA-HDLGR approach on fusion features

An average accuy investigation of the EAAA-HDLGR approach and the other methods on fusion features is given in Fig. 8. The outcomes show that the EAAA-HDLGR system reaches an improved average accuy of 97.98%, whereas the SVM, ELM, LSTM, DBN, and SSA-DBN systems achieve lower accuy of 93.97%, 95.02%, 94.24%, 95.41%, and 95.62%, respectively.


Figure 8: Average outcome of EAAA-HDLGR approach on fusion features

The training accuracy (TACC) and validation accuracy (VACC) of the EAAA-HDLGR approach on the gait classification task are shown in Fig. 9. The figure shows that the EAAA-HDLGR methodology delivers strong performance with improved TACC and VACC values, and that it reaches maximal TACC outcomes.


Figure 9: TACC and VACC outcome of EAAA-HDLGR approach

The training loss (TLS) and validation loss (VLS) of the EAAA-HDLGR system on the gait classification task are shown in Fig. 10. The figure shows that the EAAA-HDLGR algorithm delivers better performance with minimal TLS and VLS values; notably, the EAAA-HDLGR model results in lower VLS outcomes.


Figure 10: TLS and VLS outcome of the EAAA-HDLGR approach

4  Conclusion

In this study, we have derived a new EAAA-HDLGR technique for gait recognition using sEMG signals. First, the EAAA-HDLGR technique derives TD and FD features from the sEMG signals, which are then fused. In addition, the EAAA-HDLGR technique exploits the HDL model for gait recognition. Finally, an EAAA-based hyperparameter optimizer, mainly derived through the QOBL concept, is applied to the HDL model. The classification outcomes of the EAAA-HDLGR technique are examined under diverse aspects, and the results indicate the superiority of the EAAA-HDLGR technique. The results imply that the EAAA-HDLGR technique accomplishes improved results with the inclusion of EAAA in gait recognition. In future work, feature reduction and feature selection processes can be combined to boost the recognition rate of the EAAA-HDLGR technique.

Funding Statement: This research was supported by a grant from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HI21C1831) and the Soonchunhyang University Research Fund.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

1. Z. Ni and B. Huang, “Human identification based on natural gait micro‐Doppler signatures using deep transfer learning,” IET Radar, Sonar & Navigation, vol. 14, no. 10, pp. 1640–1646, 2020. [Google Scholar]

2. X. Bai, Y. Hui, L. Wang and F. Zhou, “Radar-based human gait recognition using dual-channel deep convolutional neural network,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 12, pp. 9767–9778, 2019. [Google Scholar]

3. M. Sharif, M. Attique, M. Z. Tahir, M. Yasmim, T. Saba et al., “A machine learning method with threshold based parallel feature fusion and feature selection for automated gait recognition,” Journal of Organizational and End User Computing, vol. 32, no. 2, pp. 67–92, 2020. [Google Scholar]

4. H. Wu, X. Zhang, J. Wu and L. Yan, “Classification algorithm for human walking gait based on multi-sensor feature fusion,” in Int. Conf. on Mechatronics and Intelligent Robotics, Advances in Intelligent Systems and Computing Book Series, Cham, Springer, vol. 856, pp. 718–725, 2019. [Google Scholar]

5. H. Arshad, M. A. Khan, M. Sharif, M. Yasmin and M. Y. Javed, “Multi-level features fusion and selection for human gait recognition: An optimized framework of Bayesian model and binomial distribution,” International Journal of Machine Learning and Cybernetics, vol. 10, no. 12, pp. 3601–3618, 2019. [Google Scholar]

6. N. Mansouri, M. A. Issa and Y. B. Jemaa, “Gait features fusion for efficient automatic age classification,” IET Computer Vision, vol. 12, no. 1, pp. 69–75, 2018. [Google Scholar]

7. D. Thakur and S. Biswas, “Feature fusion using deep learning for smartphone based human activity recognition,” International Journal of Information Technology, vol. 13, no. 4, pp. 1615–1624, 2021. [Google Scholar] [PubMed]

8. F. M. Castro, M. J. Marín-Jiménez, N. Guil and N. Pérez de la Blanca, “Multimodal feature fusion for CNN-based gait recognition: An empirical comparison,” Neural Computing and Applications, vol. 32, no. 17, pp. 14173–14193, 2020. [Google Scholar]

9. M. M. Hasan and H. A. Mustafa, “Multi-level feature fusion for robust pose-based gait recognition using RNN,” International Journal of Computer Science and Information Security, vol. 18, no. 2, pp. 20–31, 2021. [Google Scholar]

10. Y. Lang, Q. Wang, Y. Yang, C. Hou, D. Huang et al., “Unsupervised domain adaptation for micro-Doppler human motion classification via feature fusion,” IEEE Geoscience and Remote Sensing Letters, vol. 16, no. 3, pp. 392–396, 2018. [Google Scholar]

11. M. Li, S. Tian, L. Sun and X. Chen, “Gait analysis for post-stroke hemiparetic patient by multi-features fusion method,” Sensors, vol. 19, no. 7, pp. 1737, 2019. [Google Scholar] [PubMed]

12. A. Abdelbaky and S. Aly, “Two-stream spatiotemporal feature fusion for human action recognition,” The Visual Computer, vol. 37, no. 7, pp. 1821–1835, 2021. [Google Scholar]

13. F. F. Wahid, “Statistical features from frame aggregation and differences for human gait recognition,” Multimedia Tools and Applications, vol. 80, no. 12, pp. 18345–18364, 2021. [Google Scholar]

14. Q. Hong, Z. Wang, J. Chen and B. Huang, “Cross-view gait recognition based on feature fusion,” in IEEE 33rd Int. Conf. on Tools with Artificial Intelligence (ICTAI), Washington, DC, USA, pp. 640–646, 2021. [Google Scholar]

15. K. Sugandhi, F. F. Wahid and G. Raju, “Inter frame statistical feature fusion for human gait recognition,” in Int. Conf. on Data Science and Communication (IconDSC), Bangalore, India, pp. 1–5, 2019. [Google Scholar]

16. M. A. Khan, H. Arshad, R. Damaševičius, A. Alqahtani, S. Alsubai et al., “Human gait analysis: A sequential framework of lightweight deep learning and improved moth-flame optimization algorithm,” Computational Intelligence and Neuroscience, vol. 2022, pp. 1–13, 2022. [Google Scholar]

17. Y. Liang, E. H. K. Yeung and Y. Hu, “Parallel CNN classification for human gait identification with optimal cross data-set transfer learning,” in IEEE Int. Conf. on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), Hong Kong, China, pp. 1–6, 2021. [Google Scholar]

18. F. Saleem, M. A. Khan, M. Alhaisoni, U. Tariq, A. Armghan et al., “Human gait recognition: A single stream optimal deep learning features fusion,” Sensors, vol. 21, no. 22, pp. 7584, 2021. [Google Scholar] [PubMed]

19. L. Hashem, R. Al-Harakeh and A. Cherry, “Human gait identification system based on transfer learning,” in 21st Int. Arab Conf. on Information Technology (ACIT), Giza, Egypt, pp. 1–6, 2020. [Google Scholar]

20. A. Mehmood, M. A. Khan, M. Sharif, S. A. Khan, M. Shaheen et al., “Prosperous human gait recognition: An end-to-end system based on pre-trained CNN features selection,” Multimedia Tools and Applications, pp. 1–21, 2020. https://doi.org/10.1007/s11042-020-08928-0 [Google Scholar] [CrossRef]

21. M. I. Sharif, M. A. Khan, A. Alqahtani, M. Nazir, S. Alsubai et al., “Deep learning and kurtosis-controlled, entropy-based framework for human gait recognition using video sequences,” Electronics, vol. 11, no. 3, pp. 334, 2022. [Google Scholar]

22. J. He, F. Gao, J. Wang, Q. Wu, Q. Zhang et al., “A method combining multi-feature fusion and optimized deep belief network for emg-based human gait classification,” Mathematics, vol. 10, no. 22, pp. 4387, 2022. [Google Scholar]

23. R. J. Kavitha, C. Thiagarajan, P. I. Priya, A. V. Anand, E. A. Al-Ammar et al., “Improved harris hawks optimization with hybrid deep learning based heating and cooling load prediction on residential buildings,” Chemosphere, vol. 309, pp. 136525, 2022. [Google Scholar] [PubMed]

24. K. I. Anwer and S. Servi, “Clustering method based on artificial algae algorithm,” International Journal of Intelligent Systems and Applications in Engineering, vol. 9, no. 4, pp. 136–151, 2021. [Google Scholar]

25. J. Xia, H. Zhang, R. Li, Z. Wang, Z. Cai et al., “Adaptive barebones salp swarm algorithm with quasi-oppositional learning for medical diagnosis systems: A comprehensive analysis,” Journal of Bionic Engineering, vol. 19, no. 1, pp. 240–256, 2022. [Google Scholar]




This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.