Search Results (35)
  • Open Access

    ARTICLE

    Robust Audio-Visual Fusion for Emotion Recognition Based on Cross-Modal Learning under Noisy Conditions

    A-Seong Moon1, Seungyeon Jeong1, Donghee Kim1, Mohd Asyraf Zulkifley2, Bong-Soo Sohn3,*, Jaesung Lee1,*

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 2851-2872, 2025, DOI:10.32604/cmc.2025.067103 - 23 September 2025

    Abstract Emotion recognition under uncontrolled and noisy environments presents persistent challenges in the design of emotionally responsive systems. The current study introduces an audio-visual recognition framework designed to address performance degradation caused by environmental interference, such as background noise, overlapping speech, and visual obstructions. The proposed framework employs a structured fusion approach, combining early-stage feature-level integration with decision-level coordination guided by temporal attention mechanisms. Audio data are transformed into mel-spectrogram representations, and visual data are represented as raw frame sequences. Spatial and temporal features are extracted through convolutional and transformer-based encoders, allowing the framework to capture…

  • Open Access

    ARTICLE

    Does problematic mobile phone use affect facial emotion recognition?

    Bowei Go, Xianli An*

    Journal of Psychology in Africa, Vol.35, No.4, pp. 523-533, 2025, DOI:10.32604/jpa.2025.070123 - 17 August 2025

    Abstract This study investigated the impact of problematic mobile phone use (PMPU) on emotion recognition. The PMPU levels of 150 participants were measured using the standardized SAS-SV scale. Based on the SAS-SV cutoff scores, participants were divided into PMPU and Control groups. These participants completed two emotion recognition experiments involving facial emotion stimuli that had been manipulated to varying emotional intensities using Morph software. Experiment 1 (n = 75) assessed differences in facial emotion detection accuracy. Experiment 2 (n = 75), based on signal detection theory, examined differences in hit and false alarm rates across emotional expressions.

  • Open Access

    ARTICLE

    EEG Scalogram Analysis in Emotion Recognition: A Swin Transformer and TCN-Based Approach

    Selime Tuba Pesen, Mehmet Ali Altuncu*

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 5597-5611, 2025, DOI:10.32604/cmc.2025.066702 - 30 July 2025

    Abstract EEG signals are widely used in emotion recognition due to their ability to reflect involuntary physiological responses. However, the high dimensionality of EEG signals and their continuous variability in the time-frequency plane make their analysis challenging. Therefore, advanced deep learning methods are needed to extract meaningful features and improve classification performance. This study proposes a hybrid model that integrates the Swin Transformer and Temporal Convolutional Network (TCN) mechanisms for EEG-based emotion recognition. EEG signals are first converted into scalogram images using Continuous Wavelet Transform (CWT), and classification is performed on these images. Swin Transformer is…

  • Open Access

    ARTICLE

    Dual-Task Contrastive Meta-Learning for Few-Shot Cross-Domain Emotion Recognition

    Yujiao Tang1, Yadong Wu1,*, Yuanmei He2, Jilin Liu1, Weihan Zhang1

    CMC-Computers, Materials & Continua, Vol.82, No.2, pp. 2331-2352, 2025, DOI:10.32604/cmc.2024.059115 - 17 February 2025

    Abstract Emotion recognition plays a crucial role in various fields and is a key task in natural language processing (NLP). The objective is to identify and interpret emotional expressions in text. However, traditional emotion recognition approaches often struggle in few-shot cross-domain scenarios due to their limited capacity to generalize semantic features across different domains. Additionally, these methods face challenges in accurately capturing complex emotional states, particularly those that are subtle or implicit. To overcome these limitations, we introduce a novel approach called Dual-Task Contrastive Meta-Learning (DTCML). This method combines meta-learning and contrastive learning to improve emotion…

  • Open Access

    ARTICLE

    Multi-Head Encoder Shared Model Integrating Intent and Emotion for Dialogue Summarization

    Xinlai Xing, Junliang Chen*, Xiaochuan Zhang, Shuran Zhou, Runqing Zhang

    CMC-Computers, Materials & Continua, Vol.82, No.2, pp. 2275-2292, 2025, DOI:10.32604/cmc.2024.056877 - 17 February 2025

    Abstract In task-oriented dialogue systems, intent, emotion, and actions are crucial elements of user activity. Analyzing the relationships among these elements to control and manage task-oriented dialogue systems is a challenging task. However, previous work has primarily focused on the independent recognition of user intent and emotion, making it difficult to simultaneously track both aspects in the dialogue tracking module and to effectively utilize user emotions in subsequent dialogue strategies. We propose a Multi-Head Encoder Shared Model (MESM) that dynamically integrates features from emotion and intent encoders through a feature fusioner. Addressing the scarcity of datasets…

  • Open Access

    REVIEW

    Comprehensive Review and Analysis on Facial Emotion Recognition: Performance Insights into Deep and Traditional Learning with Current Updates and Challenges

    Amjad Rehman1, Muhammad Mujahid1, Alex Elyassih1, Bayan AlGhofaily1, Saeed Ali Omer Bahaj2,*

    CMC-Computers, Materials & Continua, Vol.82, No.1, pp. 41-72, 2025, DOI:10.32604/cmc.2024.058036 - 03 January 2025

    Abstract In computer vision and artificial intelligence, automatic facial expression-based emotion identification of humans has become a popular research and industry problem. Recent demonstrations and applications in several fields, including computer games, smart homes, expression analysis, gesture recognition, surveillance films, depression therapy, patient monitoring, anxiety, and others, have brought attention to its significant academic and commercial importance. This study emphasizes research that has only employed facial images for face expression recognition (FER), because facial expressions are a basic way that people communicate meaning to each other. The immense achievement of deep learning has resulted in a…

  • Open Access

    ARTICLE

    Occluded Gait Emotion Recognition Based on Multi-Scale Suppression Graph Convolutional Network

    Yuxiang Zou1, Ning He2,*, Jiwu Sun1, Xunrui Huang1, Wenhua Wang1

    CMC-Computers, Materials & Continua, Vol.82, No.1, pp. 1255-1276, 2025, DOI:10.32604/cmc.2024.055732 - 03 January 2025

    Abstract In recent years, gait-based emotion recognition has been widely applied in the field of computer vision. However, existing gait emotion recognition methods typically rely on complete human skeleton data, and their accuracy significantly declines when the data is occluded. To enhance the accuracy of gait emotion recognition under occlusion, this paper proposes a Multi-scale Suppression Graph Convolutional Network (MS-GCN). The MS-GCN consists of three main components: Joint Interpolation Module (JI Module), Multi-scale Temporal Convolution Network (MS-TCN), and Suppression Graph Convolutional Network (SGCN). The JI Module completes the spatially occluded skeletal joints using the K-Nearest Neighbors…

  • Open Access

    ARTICLE

    Faster Region Convolutional Neural Network (FRCNN) Based Facial Emotion Recognition

    J. Sheril Angel1, A. Diana Andrushia1,*, T. Mary Neebha1, Oussama Accouche2, Louai Saker2, N. Anand3

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2427-2448, 2024, DOI:10.32604/cmc.2024.047326 - 15 May 2024

    Abstract Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. The research aims to develop a FER system using a Faster Region Convolutional Neural Network (FRCNN) and design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial…

  • Open Access

    ARTICLE

    Multi-Objective Equilibrium Optimizer for Feature Selection in High-Dimensional English Speech Emotion Recognition

    Liya Yue1, Pei Hu2, Shu-Chuan Chu3, Jeng-Shyang Pan3,4,*

    CMC-Computers, Materials & Continua, Vol.78, No.2, pp. 1957-1975, 2024, DOI:10.32604/cmc.2024.046962 - 27 February 2024

    Abstract Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher Score to sort the features extracted from signals. Then, we employ a multi-objective ranking method…

  • Open Access

    ARTICLE

    Exploring Sequential Feature Selection in Deep Bi-LSTM Models for Speech Emotion Recognition

    Fatma Harby1, Mansor Alohali2, Adel Thaljaoui2,3,*, Amira Samy Talaat4

    CMC-Computers, Materials & Continua, Vol.78, No.2, pp. 2689-2719, 2024, DOI:10.32604/cmc.2024.046623 - 27 February 2024

    Abstract Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker’s emotional state. The examination of the emotional states of speakers holds significant importance in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture…

Displaying results 1-10 of 35 (page 1).