Search Results (28)
  • Open Access


    Parameter Tuned Machine Learning Based Emotion Recognition on Arabic Twitter Data

    Ibrahim M. Alwayle1, Badriyya B. Al-onazi2, Jaber S. Alzahrani3, Khaled M. Alalayah1, Khadija M. Alaidarous1, Ibrahim Abdulrab Ahmed4, Mahmoud Othman5, Abdelwahed Motwakel6,*

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3423-3438, 2023, DOI:10.32604/csse.2023.033834

    Abstract Arabic is one of the most widely spoken languages in the world, yet there are comparatively few studies on Sentiment Analysis (SA) in Arabic. In recent years, the sentiments and emotions expressed in tweets have received significant interest. The substantial role played by the Arab region in international politics and the global economy has increased the need to examine sentiments and emotions in the Arabic language. Two common approaches are available for emotion classification: Machine Learning and lexicon-based methods. With this motivation, the current research article develops a Teaching and Learning Optimization with…

  • Open Access


    A Multi-Modal Deep Learning Approach for Emotion Recognition

    H. M. Shahzad1,3, Sohail Masood Bhatti1,3,*, Arfan Jaffar1,3, Muhammad Rashid2

    Intelligent Automation & Soft Computing, Vol.36, No.2, pp. 1561-1570, 2023, DOI:10.32604/iasc.2023.032525

    Abstract In recent years, research on facial expression recognition (FER) under masks has been trending. Wearing a mask for protection from COVID-19 has become compulsory, and because a mask hides facial expressions, FER under a mask is a difficult task. The prevailing unimodal techniques for facial recognition do not yield good results for masked faces; however, a multimodal technique can be employed to generate better results. We propose a multimodal methodology based on deep learning for facial recognition under a masked face using facial and vocal…

  • Open Access


    A Multi-Level Circulant Cross-Modal Transformer for Multimodal Speech Emotion Recognition

    Peizhu Gong1, Jin Liu1, Zhongdai Wu2, Bing Han2, Y. Ken Wang3, Huihua He4,*

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 4203-4220, 2023, DOI:10.32604/cmc.2023.028291

    Abstract Speech emotion recognition, as an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, since the signals include the semantic features of two different modalities, i.e., audio and text. However, existing methods often fail to effectively represent features and capture correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model can be divided into three steps: feature extraction, interaction, and fusion. Self-supervised embedding models are introduced for feature extraction, which give a…

  • Open Access


    Performance Analysis of a Chunk-Based Speech Emotion Recognition Model Using RNN

    Hyun-Sam Shin1, Jun-Ki Hong2,*

    Intelligent Automation & Soft Computing, Vol.36, No.1, pp. 235-248, 2023, DOI:10.32604/iasc.2023.033082

    Abstract Recently, artificial-intelligence-based automatic customer response systems have been widely used in place of customer service representatives, so it is important for such systems to promptly recognize emotions in a customer’s voice and provide the appropriate service accordingly. We therefore analyzed the emotion recognition (ER) accuracy as a function of simulation time using the proposed chunk-based speech ER (CSER) model. The proposed CSER model divides voice signals into 3-s-long chunks to efficiently recognize the emotions inherent in the customer’s voice. We evaluated the ER performance on voice signal chunks…
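The chunking step described in the CSER abstract can be sketched as follows. The 3-second chunk length comes from the abstract; the 16 kHz sample rate and the policy of dropping the trailing partial chunk are illustrative assumptions, not details from the paper.

```python
import numpy as np

def split_into_chunks(signal, sample_rate=16000, chunk_seconds=3.0):
    """Split a 1-D audio signal into fixed-length chunks.

    The 3-second chunk length follows the paper's description; the
    16 kHz sample rate is an assumption for illustration. Trailing
    samples that do not fill a whole chunk are dropped.
    """
    chunk_len = int(sample_rate * chunk_seconds)
    n_chunks = len(signal) // chunk_len
    return [signal[i * chunk_len:(i + 1) * chunk_len] for i in range(n_chunks)]

# 10 seconds of audio at 16 kHz -> three full 3-second chunks
# (the final partial second is discarded).
audio = np.zeros(10 * 16000)
chunks = split_into_chunks(audio)
print(len(chunks), len(chunks[0]))  # 3 48000
```

Each chunk would then be fed independently to the recurrent ER model, which is what makes per-chunk recognition latency short.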

  • Open Access


    The Efficacy of Deep Learning-Based Mixed Model for Speech Emotion Recognition

    Mohammad Amaz Uddin1, Mohammad Salah Uddin Chowdury1, Mayeen Uddin Khandaker2,*, Nissren Tamam3, Abdelmoneim Sulieman4

    CMC-Computers, Materials & Continua, Vol.74, No.1, pp. 1709-1722, 2023, DOI:10.32604/cmc.2023.031177

    Abstract Human speech indirectly conveys the mental state or emotion of the speaker. The use of Artificial Intelligence (AI)-based techniques may bring a revolution in this modern era by recognizing emotion from speech. In this study, we introduce a robust method for emotion recognition from human speech using a well-performing preprocessing technique together with a deep learning-based mixed model consisting of a Long Short-Term Memory (LSTM) network and a Convolutional Neural Network (CNN). About 2800 audio files were extracted from the Toronto emotional speech set (TESS) database for this study. A high-pass filter and a Savitzky–Golay filter have been used to…

  • Open Access


    Multilayer Neural Network Based Speech Emotion Recognition for Smart Assistance

    Sandeep Kumar1, MohdAnul Haq2, Arpit Jain3, C. Andy Jason4, Nageswara Rao Moparthi1, Nitin Mittal5, Zamil S. Alzamil2,*

    CMC-Computers, Materials & Continua, Vol.74, No.1, pp. 1523-1540, 2023, DOI:10.32604/cmc.2023.028631

    Abstract Day by day, biometric-based systems play a more vital role in our daily lives. This paper proposes an intelligent assistant intended to identify emotions via voice messages. A biometric system has been developed to detect human emotions based on voice recognition and to control a few electronic peripherals for alert actions. The proposed smart assistant aims to support people through buzzer and light-emitting diode (LED) alert signals, and it also keeps track of places such as households, hospitals, and remote areas. The proposed approach is able to detect seven emotions: worry,…

  • Open Access


    Emotion Recognition from Occluded Facial Images Using Deep Ensemble Model

    Zia Ullah1, Muhammad Ismail Mohmand1, Sadaqat ur Rehman2,*, Muhammad Zubair3, Maha Driss4, Wadii Boulila5, Rayan Sheikh2, Ibrahim Alwawi6

    CMC-Computers, Materials & Continua, Vol.73, No.3, pp. 4465-4487, 2022, DOI:10.32604/cmc.2022.029101

    Abstract Facial expression recognition has been a hot topic for decades, but high intraclass variation makes it challenging. To overcome intraclass variation for visual recognition, we introduce a novel fusion methodology in which the proposed model first extracts features and then performs feature fusion. Specifically, ResNet-50, VGG-19, and Inception-V3 are used for feature learning, followed by feature fusion. Finally, the three feature extraction models are combined using ensemble learning techniques for final expression classification. The representation learnt by the proposed methodology is robust to occlusions and pose variations and offers promising accuracy. To evaluate the efficiency…
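The feature-fusion step described above can be sketched as simple concatenation of per-backbone feature vectors. Here each "extractor" is a fixed random projection standing in for the real pretrained CNNs (ResNet-50, VGG-19, Inception-V3); the input and output dimensions are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_extractor(in_dim, out_dim):
    """Stand-in for a pretrained backbone: a fixed random projection.

    An illustrative assumption in place of real CNN feature maps.
    """
    w = rng.standard_normal((in_dim, out_dim))
    return lambda x: x @ w

# Three stand-in backbones, each mapping a 1024-dim input to 128-dim features.
extractors = [make_extractor(1024, 128) for _ in range(3)]

def fuse_features(image_vec):
    """Late fusion: concatenate the feature vectors from all extractors."""
    return np.concatenate([f(image_vec) for f in extractors])

fused = fuse_features(rng.standard_normal(1024))
print(fused.shape)  # (384,)
```

The fused vector would then go to the final ensemble classifier; concatenation is one common fusion choice, used here only to make the pipeline shape concrete.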

  • Open Access


    Apex Frame Spotting Using Attention Networks for Micro-Expression Recognition System

    Ng Lai Yee1, Mohd Asyraf Zulkifley2,*, Adhi Harmoko Saputro3, Siti Raihanah Abdani4

    CMC-Computers, Materials & Continua, Vol.73, No.3, pp. 5331-5348, 2022, DOI:10.32604/cmc.2022.028801

    Abstract Micro-expressions are manifested through subtle and brief facial movements that reveal a person’s genuine, hidden emotion. In a video sequence, the frame that captures the maximum facial difference is called the apex frame. Therefore, apex frame spotting is a crucial sub-module in a micro-expression recognition system. However, this spotting task is very challenging because micro-expressions occur over a short duration with low-intensity muscle movements. Moreover, most existing automated works face difficulties in differentiating micro-expressions from other facial movements. Therefore, this paper presents a deep…

  • Open Access


    EEG Emotion Recognition Using an Attention Mechanism Based on an Optimized Hybrid Model

    Huiping Jiang1,*, Demeng Wu1, Xingqun Tang1, Zhongjie Li1, Wenbo Wu2

    CMC-Computers, Materials & Continua, Vol.73, No.2, pp. 2697-2712, 2022, DOI:10.32604/cmc.2022.027856

    Abstract Emotions serve various functions. Traditional emotion recognition methods are based primarily on readily accessible facial expressions, gestures, and voice signals. However, it is often challenging to ensure that these non-physiological signals are valid and reliable in practical applications. Electroencephalogram (EEG) signals are more successful than other methods in recognizing emotional characteristics in real time, since they are difficult to camouflage. Although EEG signals are commonly used in emotion recognition research, accuracy is low when using traditional methods. Therefore, this study presents an optimized hybrid pattern with an attention mechanism (FFT_CLA) for…

  • Open Access


    Design of Hierarchical Classifier to Improve Speech Emotion Recognition

    P. Vasuki*

    Computer Systems Science and Engineering, Vol.44, No.1, pp. 19-33, 2023, DOI:10.32604/csse.2023.024441

    Abstract Automatic Speech Emotion Recognition (SER) is used to recognize emotion from speech automatically. Speech emotion recognition works well in a laboratory environment, but real-time emotion recognition is influenced by variations in the gender, age, and cultural and acoustic background of the speaker. The acoustic resemblance between emotional expressions further increases the complexity of recognition. Many recent research works concentrate on addressing these effects individually. Instead of addressing every influencing attribute individually, we design a system that reduces the effect arising from any single factor. We propose a two-level Hierarchical…
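A two-level hierarchical classifier of the kind the abstract describes can be sketched as a coarse decision followed by a refinement within the chosen group. The grouping of emotions by arousal and the toy threshold rules below are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical grouping of emotions into broad arousal classes.
GROUPS = {"high_arousal": ["angry", "happy"], "low_arousal": ["sad", "calm"]}

def level1(features):
    """Coarse classifier: pick a broad group from a summary feature."""
    return "high_arousal" if features["energy"] > 0.5 else "low_arousal"

def level2(group, features):
    """Refined classifier: choose an emotion within the chosen group."""
    if group == "high_arousal":
        return "angry" if features["pitch_var"] > 0.5 else "happy"
    return "sad" if features["pitch_var"] > 0.5 else "calm"

def classify(features):
    """Two-level hierarchy: coarse group first, then a within-group label."""
    return level2(level1(features), features)

print(classify({"energy": 0.9, "pitch_var": 0.2}))  # happy
print(classify({"energy": 0.1, "pitch_var": 0.8}))  # sad
```

In a real system each level would be a trained model; the hierarchy's benefit is that the coarse decision absorbs speaker-level variation before the fine-grained one is made.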

Displaying results 11–20 of 28 (page 2).