Search Results (28)
  • Open Access


    Faster Region Convolutional Neural Network (FRCNN) Based Facial Emotion Recognition

    J. Sheril Angel, A. Diana Andrushia, T. Mary Neebha, Oussama Accouche, Louai Saker, N. Anand

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2427-2448, 2024, DOI:10.32604/cmc.2024.047326

    Abstract: Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. The research aims to develop a FER system using a Faster Region Convolutional Neural Network (FRCNN) and design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial…

  • Open Access


    Multi-Objective Equilibrium Optimizer for Feature Selection in High-Dimensional English Speech Emotion Recognition

    Liya Yue, Pei Hu, Shu-Chuan Chu, Jeng-Shyang Pan

    CMC-Computers, Materials & Continua, Vol.78, No.2, pp. 1957-1975, 2024, DOI:10.32604/cmc.2024.046962

    Abstract: Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher Score to sort the features extracted from signals. Then, we employ a multi-objective ranking method…

  • Open Access


    Exploring Sequential Feature Selection in Deep Bi-LSTM Models for Speech Emotion Recognition

    Fatma Harby, Mansor Alohali, Adel Thaljaoui, Amira Samy Talaat

    CMC-Computers, Materials & Continua, Vol.78, No.2, pp. 2689-2719, 2024, DOI:10.32604/cmc.2024.046623

    Abstract: Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker’s emotional state. The examination of the emotional states of speakers holds significant importance in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture…

  • Open Access


    Improved Speech Emotion Recognition Focusing on High-Level Data Representations and Swift Feature Extraction Calculation

    Akmalbek Abdusalomov, Alpamis Kutlimuratov, Rashid Nasimov, Taeg Keun Whangbo

    CMC-Computers, Materials & Continua, Vol.77, No.3, pp. 2915-2933, 2023, DOI:10.32604/cmc.2023.044466

    Abstract: The performance of a speech emotion recognition (SER) system is heavily influenced by the efficacy of its feature extraction techniques. The study was designed to advance the field of SER by optimizing feature extraction techniques, specifically through the incorporation of high-resolution Mel-spectrograms and the expedited calculation of Mel Frequency Cepstral Coefficients (MFCC). This initiative aimed to refine the system’s accuracy by identifying and mitigating the shortcomings commonly found in current approaches. Ultimately, the primary objective was to elevate both the intricacy and effectiveness of our SER model, with a focus on augmenting its proficiency in…

  • Open Access


    Using Speaker-Specific Emotion Representations in Wav2vec 2.0-Based Modules for Speech Emotion Recognition

    Somin Park, Mpabulungi Mark, Bogyung Park, Hyunki Hong

    CMC-Computers, Materials & Continua, Vol.77, No.1, pp. 1009-1030, 2023, DOI:10.32604/cmc.2023.041332

    Abstract: Speech emotion recognition is essential for frictionless human-machine interaction, where machines respond to human instructions with context-aware actions. The properties of individuals’ voices vary with culture, language, gender, and personality. These variations in speaker-specific properties may hamper the performance of standard representations in downstream tasks such as speech emotion recognition (SER). This study demonstrates the significance of speaker-specific speech characteristics and how considering them can be leveraged to improve the performance of SER models. In the proposed approach, two wav2vec-based modules (a speaker-identification network and an emotion classification network) are trained with the Arcface loss…

  • Open Access


    Text Augmentation-Based Model for Emotion Recognition Using Transformers

    Fida Mohammad, Mukhtaj Khan, Safdar Nawaz Khan Marwat, Naveed Jan, Neelam Gohar, Muhammad Bilal, Amal Al-Rasheed

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3523-3547, 2023, DOI:10.32604/cmc.2023.040202

    Abstract: Emotion Recognition in Conversations (ERC) is fundamental in creating emotionally intelligent machines. Graph-Based Network (GBN) models have gained popularity in detecting conversational contexts for ERC tasks. However, their limited ability to collect and acquire contextual information hinders their effectiveness. To address this, we propose a Text Augmentation-based model for recognizing emotions using transformers (TA-MERT). The proposed model uses the Multimodal Emotion Lines Dataset (MELD), which ensures a balanced representation for recognizing human emotions. The model uses text augmentation techniques to produce more training data, improving the proposed model’s accuracy. Transformer encoders train the deep…

  • Open Access


    Deep Facial Emotion Recognition Using Local Features Based on Facial Landmarks for Security System

    Youngeun An, Jimin Lee, EunSang Bak, Sungbum Pan

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 1817-1832, 2023, DOI:10.32604/cmc.2023.039460

    Abstract: Emotion recognition based on facial expressions is one of the most critical elements of human-machine interfaces. Most conventional methods for emotion recognition using facial expressions use the entire facial image to extract features and then recognize specific emotions through a pre-trained model. In contrast, this paper proposes a novel feature vector extraction method using the Euclidean distance between the landmarks changing their positions according to facial expressions, especially around the eyes, eyebrows, nose, and mouth. Then, we apply a new classifier using an ensemble network to increase emotion recognition accuracy. The emotion recognition performance was…

  • Open Access


    A Method of Multimodal Emotion Recognition in Video Learning Based on Knowledge Enhancement

    Hanmin Ye, Yinghui Zhou, Xiaomei Tao

    Computer Systems Science and Engineering, Vol.47, No.2, pp. 1709-1732, 2023, DOI:10.32604/csse.2023.039186

    Abstract: With the popularity of online learning and the significant influence of emotion on learning outcomes, more and more research focuses on emotion recognition in online learning. Most current research uses the comments of the learning platform or the learner’s expression for emotion recognition, while research data on other modalities are scarce. Most studies also ignore the impact of instructional videos on learners and the guidance of knowledge on data. To address the need for data in other modalities, we construct a synchronous multimodal data set for analyzing learners’…

  • Open Access


    TC-Net: A Modest & Lightweight Emotion Recognition System Using Temporal Convolution Network

    Muhammad Ishaq, Mustaqeem Khan, Soonil Kwon

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3355-3369, 2023, DOI:10.32604/csse.2023.037373

    Abstract: Speech signals play an essential role in communication and provide an efficient way to exchange information between humans and machines. Speech Emotion Recognition (SER) is one of the critical sources for human evaluation, which is applicable in many real-world applications such as healthcare, call centers, robotics, safety, and virtual reality. This work developed a novel TCN-based emotion recognition system using speech signals through a spatial-temporal convolution network to recognize the speaker’s emotional state. The authors designed a Temporal Convolutional Network (TCN) core block to recognize long-term dependencies in speech signals and then feed these temporal…

  • Open Access


    Facial Emotion Recognition Using Swarm Optimized Multi-Dimensional DeepNets with Losses Calculated by Cross Entropy Function

    A. N. Arun, P. Maheswaravenkatesh, T. Jayasankar

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3285-3301, 2023, DOI:10.32604/csse.2023.035356

    Abstract: The human face forms a canvas on which various non-verbal expressions are communicated. Together with verbal communication, these expressional cues convey an accurate perception of a person’s actual intent. In many cases, a person may present an outward expression that differs from the genuine emotion or feeling that the person experiences. Even when people try to hide these emotions, the real emotions that are internally felt may surface as facial micro expressions. These micro expressions cannot be masked and reflect the actual emotional state of a person under study. Such micro…

Displaying results 1-10 of 28 (page 1).