Search Results (30)
  • Open Access

    REVIEW

    Comprehensive Review and Analysis on Facial Emotion Recognition: Performance Insights into Deep and Traditional Learning with Current Updates and Challenges

    Amjad Rehman1, Muhammad Mujahid1, Alex Elyassih1, Bayan AlGhofaily1, Saeed Ali Omer Bahaj2,*

    CMC-Computers, Materials & Continua, Vol.82, No.1, pp. 41-72, 2025, DOI:10.32604/cmc.2024.058036 - 03 January 2025

    Abstract In computer vision and artificial intelligence, automatic facial expression-based emotion identification of humans has become a popular research and industry problem. Recent demonstrations and applications in several fields, including computer games, smart homes, expression analysis, gesture recognition, surveillance films, depression therapy, patient monitoring, anxiety, and others, have brought attention to its significant academic and commercial importance. This study emphasizes research that has only employed facial images for face expression recognition (FER), because facial expressions are a basic way that people communicate meaning to each other. The immense achievement of deep learning has resulted in a…

  • Open Access

    ARTICLE

    Occluded Gait Emotion Recognition Based on Multi-Scale Suppression Graph Convolutional Network

    Yuxiang Zou1, Ning He2,*, Jiwu Sun1, Xunrui Huang1, Wenhua Wang1

    CMC-Computers, Materials & Continua, Vol.82, No.1, pp. 1255-1276, 2025, DOI:10.32604/cmc.2024.055732 - 03 January 2025

    Abstract In recent years, gait-based emotion recognition has been widely applied in the field of computer vision. However, existing gait emotion recognition methods typically rely on complete human skeleton data, and their accuracy significantly declines when the data is occluded. To enhance the accuracy of gait emotion recognition under occlusion, this paper proposes a Multi-scale Suppression Graph Convolutional Network (MS-GCN). The MS-GCN consists of three main components: Joint Interpolation Module (JI Module), Multi-scale Temporal Convolution Network (MS-TCN), and Suppression Graph Convolutional Network (SGCN). The JI Module completes the spatially occluded skeletal joints using the K-Nearest Neighbors…
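
    To make the joint-completion idea concrete, here is a minimal Python sketch of a KNN-style interpolation of occluded skeleton joints, assuming occlusions are marked as NaN and that a missing joint is filled from the k temporally nearest frames in which it is visible; the function name and data layout are illustrative, not the paper's implementation.

```python
import numpy as np

def complete_joints(sequence, k=3):
    """Fill occluded joints from the k temporally nearest visible frames.

    sequence: (T, J, C) array of joint coordinates over T frames;
    occluded joints are marked with NaN. Illustrative stand-in only.
    """
    seq = sequence.copy()
    T, J, _ = seq.shape
    for j in range(J):
        observed = ~np.isnan(seq[:, j]).any(axis=1)
        visible = np.where(observed)[0]
        if visible.size == 0:
            continue  # joint never observed in this clip; leave as NaN
        for t in np.where(~observed)[0]:
            # indices of the k visible frames closest in time to frame t
            nearest = visible[np.argsort(np.abs(visible - t))[:k]]
            seq[t, j] = seq[nearest, j].mean(axis=0)
    return seq
```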

  • Open Access

    ARTICLE

    Faster Region Convolutional Neural Network (FRCNN) Based Facial Emotion Recognition

    J. Sheril Angel1, A. Diana Andrushia1,*, T. Mary Neebha1, Oussama Accouche2, Louai Saker2, N. Anand3

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2427-2448, 2024, DOI:10.32604/cmc.2024.047326 - 15 May 2024

    Abstract Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. The research aims to develop a FER system using a Faster Region Convolutional Neural Network (FRCNN) and design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial…
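
    The abstract's core design choice, repurposing a region-based detector for emotion categories, can be sketched with torchvision's stock Faster R-CNN; the class count and the use of the default pretrained weights below are assumptions, not details from the paper.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 8  # e.g., 7 basic emotions + background (hypothetical)

# Start from the standard pretrained detector and swap its classification
# head so each detected face region is labelled with an emotion category.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
```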

  • Open Access

    ARTICLE

    Multi-Objective Equilibrium Optimizer for Feature Selection in High-Dimensional English Speech Emotion Recognition

    Liya Yue1, Pei Hu2, Shu-Chuan Chu3, Jeng-Shyang Pan3,4,*

    CMC-Computers, Materials & Continua, Vol.78, No.2, pp. 1957-1975, 2024, DOI:10.32604/cmc.2024.046962 - 27 February 2024

    Abstract Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher Score to sort the features extracted from signals. Then, we employ a multi-objective ranking method…
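
    As a small illustration of the filter stage described here, the following computes a per-feature Fisher Score (between-class scatter over within-class scatter) that can be used to sort features before the wrapper search; it is a generic formulation, not the paper's code, and the variable names are assumptions.

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature for labelled data X (n_samples, n_features)."""
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)

# Hypothetical usage: feature indices sorted from most to least discriminative.
# ranking = np.argsort(-fisher_score(X, y))
```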

  • Open Access

    ARTICLE

    Exploring Sequential Feature Selection in Deep Bi-LSTM Models for Speech Emotion Recognition

    Fatma Harby1, Mansor Alohali2, Adel Thaljaoui2,3,*, Amira Samy Talaat4

    CMC-Computers, Materials & Continua, Vol.78, No.2, pp. 2689-2719, 2024, DOI:10.32604/cmc.2024.046623 - 27 February 2024

    Abstract Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker’s emotional state. The examination of the emotional states of speakers holds significant importance in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture…
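
    Since MFCC sequences are the short-time features this line of work starts from, a minimal extraction step (using librosa, with an assumed sampling rate and coefficient count) might look like the sketch below; it is illustrative preprocessing, not the authors' pipeline.

```python
import librosa

def mfcc_sequence(path, n_mfcc=40):
    """Return a (time, n_mfcc) MFCC sequence suitable as Bi-LSTM input."""
    signal, sr = librosa.load(path, sr=16000)                    # assumed sampling rate
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.T                                                # time-major for RNNs
```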

  • Open Access

    ARTICLE

    Improved Speech Emotion Recognition Focusing on High-Level Data Representations and Swift Feature Extraction Calculation

    Akmalbek Abdusalomov1, Alpamis Kutlimuratov2, Rashid Nasimov3, Taeg Keun Whangbo1,*

    CMC-Computers, Materials & Continua, Vol.77, No.3, pp. 2915-2933, 2023, DOI:10.32604/cmc.2023.044466 - 26 December 2023

    Abstract The performance of a speech emotion recognition (SER) system is heavily influenced by the efficacy of its feature extraction techniques. The study was designed to advance the field of SER by optimizing feature extraction techniques, specifically through the incorporation of high-resolution Mel-spectrograms and the expedited calculation of Mel Frequency Cepstral Coefficients (MFCC). This initiative aimed to refine the system’s accuracy by identifying and mitigating the shortcomings commonly found in current approaches. Ultimately, the primary objective was to elevate both the intricacy and effectiveness of our SER model, with a focus on augmenting its proficiency in…
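
    A high-resolution Mel-spectrogram of the kind mentioned here can be computed along these lines with librosa; the FFT size, hop length, and Mel-band count below are illustrative choices, not the settings used in the paper.

```python
import librosa
import numpy as np

def log_mel_spectrogram(path, n_mels=128, n_fft=2048, hop_length=256):
    """Log-scaled Mel-spectrogram with a fairly fine time-frequency grid."""
    signal, sr = librosa.load(path, sr=16000)  # assumed sampling rate
    mel = librosa.feature.melspectrogram(
        y=signal, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )
    return librosa.power_to_db(mel, ref=np.max)  # (n_mels, frames) in dB
```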

  • Open Access

    ARTICLE

    Using Speaker-Specific Emotion Representations in Wav2vec 2.0-Based Modules for Speech Emotion Recognition

    Somin Park1, Mpabulungi Mark1, Bogyung Park2, Hyunki Hong1,*

    CMC-Computers, Materials & Continua, Vol.77, No.1, pp. 1009-1030, 2023, DOI:10.32604/cmc.2023.041332 - 31 October 2023

    Abstract Speech emotion recognition is essential for frictionless human-machine interaction, where machines respond to human instructions with context-aware actions. The properties of individuals’ voices vary with culture, language, gender, and personality. These variations in speaker-specific properties may hamper the performance of standard representations in downstream tasks such as speech emotion recognition (SER). This study demonstrates the significance of speaker-specific speech characteristics and how considering them can be leveraged to improve the performance of SER models. In the proposed approach, two wav2vec-based modules (a speaker-identification network and an emotion classification network) are trained with the Arcface loss. …
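
    The Arcface (additive angular margin) loss used to train both wav2vec-based modules has a standard form; the PyTorch sketch below shows that generic formulation, with the embedding size, scale, and margin values as assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Additive angular margin classification head (generic ArcFace sketch)."""

    def __init__(self, embed_dim, num_classes, scale=30.0, margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embed_dim))
        nn.init.xavier_uniform_(self.weight)
        self.scale, self.margin = scale, margin

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalised embeddings and class weights.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the angular margin only to each sample's target-class logit.
        target = F.one_hot(labels, cosine.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.margin), cosine)
        return F.cross_entropy(self.scale * logits, labels)
```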

  • Open Access

    ARTICLE

    Text Augmentation-Based Model for Emotion Recognition Using Transformers

    Fida Mohammad1,*, Mukhtaj Khan1, Safdar Nawaz Khan Marwat2, Naveed Jan3, Neelam Gohar4, Muhammad Bilal3, Amal Al-Rasheed5

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3523-3547, 2023, DOI:10.32604/cmc.2023.040202 - 08 October 2023

    Abstract Emotion Recognition in Conversations (ERC) is fundamental in creating emotionally intelligent machines. Graph-Based Network (GBN) models have gained popularity in detecting conversational contexts for ERC tasks. However, their limited ability to collect and acquire contextual information hinders their effectiveness. We propose a Text Augmentation-based computational model for recognizing emotions using transformers (TA-MERT) to address this. The proposed model uses the Multimodal Emotion Lines Dataset (MELD), which ensures a balanced representation for recognizing human emotions. The model used text augmentation techniques to produce more training data, improving the proposed model’s accuracy. Transformer encoders train the deep…
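
    The text augmentation step could be as simple as producing lightly perturbed copies of each utterance; the sketch below (random word swap plus random deletion) is one generic possibility and is not the TA-MERT procedure itself.

```python
import random

def augment_utterance(text, n_swaps=1, p_delete=0.1, seed=None):
    """Return a lightly perturbed copy of an utterance for data augmentation."""
    rng = random.Random(seed)
    words = text.split()
    for _ in range(n_swaps):                      # swap two random word positions
        if len(words) > 1:
            i, j = rng.sample(range(len(words)), 2)
            words[i], words[j] = words[j], words[i]
    kept = [w for w in words if rng.random() > p_delete]  # drop a few words
    return " ".join(kept or words)                # never return an empty string
```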

  • Open Access

    ARTICLE

    Deep Facial Emotion Recognition Using Local Features Based on Facial Landmarks for Security System

    Youngeun An, Jimin Lee, EunSang Bak*, Sungbum Pan*

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 1817-1832, 2023, DOI:10.32604/cmc.2023.039460 - 30 August 2023

    Abstract Emotion recognition based on facial expressions is one of the most critical elements of human-machine interfaces. Most conventional methods for emotion recognition using facial expressions use the entire facial image to extract features and then recognize specific emotions through a pre-trained model. In contrast, this paper proposes a novel feature vector extraction method using the Euclidean distance between the landmarks changing their positions according to facial expressions, especially around the eyes, eyebrows, nose, and mouth. Then, we apply a new classifier using an ensemble network to increase emotion recognition accuracy. The emotion recognition performance was…
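
    The feature construction described here, Euclidean distances between landmarks around the eyes, eyebrows, nose, and mouth, can be sketched as follows; the 68-point landmark convention and the chosen index range are assumptions for illustration.

```python
from itertools import combinations
import numpy as np

def landmark_distance_features(landmarks, indices):
    """Pairwise Euclidean distances between selected facial landmarks.

    landmarks: (68, 2) array of (x, y) points from any landmark detector.
    indices: indices of the landmarks to compare (eyes, eyebrows, nose, mouth).
    """
    pts = np.asarray(landmarks)[list(indices)]
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(len(pts)), 2)])

# Hypothetical usage with the 68-point convention, where points 17-67 cover
# the eyebrows, nose, eyes, and mouth:
# features = landmark_distance_features(landmarks, range(17, 68))
```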

  • Open Access

    ARTICLE

    A Method of Multimodal Emotion Recognition in Video Learning Based on Knowledge Enhancement

    Hanmin Ye1,2, Yinghui Zhou1, Xiaomei Tao3,*

    Computer Systems Science and Engineering, Vol.47, No.2, pp. 1709-1732, 2023, DOI:10.32604/csse.2023.039186 - 28 July 2023

    Abstract With the popularity of online learning and due to the significant influence of emotion on the learning effect, more and more research focuses on emotion recognition in online learning. Most of the current research uses the comments of the learning platform or the learner’s expression for emotion recognition. The research data on other modalities are scarce. Most of the studies also ignore the impact of instructional videos on learners and the guidance of knowledge on data. Because of the need for research data in other modalities, we construct a synchronous multimodal data set for analyzing learners’…

Displaying results 1-10 of 30.