
Search Results (32)
  • Open Access

    ARTICLE

    Deep Facial Emotion Recognition Using Local Features Based on Facial Landmarks for Security System

    Youngeun An, Jimin Lee, EunSang Bak*, Sungbum Pan*

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 1817-1832, 2023, DOI:10.32604/cmc.2023.039460 - 30 August 2023

    Abstract Emotion recognition based on facial expressions is one of the most critical elements of human-machine interfaces. Most conventional methods for emotion recognition using facial expressions use the entire facial image to extract features and then recognize specific emotions through a pre-trained model. In contrast, this paper proposes a novel feature vector extraction method using the Euclidean distance between the landmarks changing their positions according to facial expressions, especially around the eyes, eyebrows, nose, and mouth. Then, we apply a new classifier using an ensemble network to increase emotion recognition accuracy. The emotion recognition performance was…
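The landmark-distance feature idea described in this abstract can be sketched as follows. This is a minimal NumPy illustration, assuming landmarks arrive as (x, y) coordinates and that the index pairs are chosen by the caller; the function and variable names are illustrative, not the paper's code:

```python
import numpy as np

def landmark_distance_features(landmarks, pairs):
    """Build a feature vector from Euclidean distances between selected
    facial landmark pairs (e.g. eye corners, eyebrow tips, mouth corners).
    landmarks: (N, 2) array of (x, y) positions; pairs: list of index pairs."""
    lm = np.asarray(landmarks, dtype=float)
    return np.array([np.linalg.norm(lm[i] - lm[j]) for i, j in pairs])

# Toy example: 4 landmarks, 2 distance pairs.
lm = [[0.0, 0.0], [3.0, 4.0], [1.0, 1.0], [1.0, 2.0]]
feat = landmark_distance_features(lm, [(0, 1), (2, 3)])
print(feat)  # distances 5.0 and 1.0
```

In practice the feature vector would be fed to a classifier (the paper uses an ensemble network); the sketch only covers the distance-extraction step.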

  • Open Access

    ARTICLE

    A Method of Multimodal Emotion Recognition in Video Learning Based on Knowledge Enhancement

    Hanmin Ye1,2, Yinghui Zhou1, Xiaomei Tao3,*

    Computer Systems Science and Engineering, Vol.47, No.2, pp. 1709-1732, 2023, DOI:10.32604/csse.2023.039186 - 28 July 2023

    Abstract With the popularity of online learning and the significant influence of emotion on learning outcomes, more and more research focuses on emotion recognition in online learning. Most current research uses comments on the learning platform or the learner’s expression for emotion recognition; research data on other modalities are scarce. Most studies also ignore the impact of instructional videos on learners and the guidance of knowledge on data. Given the need for research data in other modalities, we construct a synchronous multimodal data set for analyzing learners’…

  • Open Access

    ARTICLE

    TC-Net: A Modest & Lightweight Emotion Recognition System Using Temporal Convolution Network

    Muhammad Ishaq1, Mustaqeem Khan1,2, Soonil Kwon1,*

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3355-3369, 2023, DOI:10.32604/csse.2023.037373 - 03 April 2023

    Abstract Speech signals play an essential role in communication and provide an efficient way to exchange information between humans and machines. Speech Emotion Recognition (SER) is one of the critical sources for human evaluation, which is applicable in many real-world applications such as healthcare, call centers, robotics, safety, and virtual reality. This work developed a novel TCN-based emotion recognition system using speech signals through a spatial-temporal convolution network to recognize the speaker’s emotional state. The authors designed a Temporal Convolutional Network (TCN) core block to recognize long-term dependencies in speech signals and then feed these temporal…
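The core building block of a TCN, as referenced in this abstract, is a dilated causal convolution: each output step sees only present and past samples, and dilation widens the receptive field to capture long-term dependencies. A minimal NumPy sketch of that building block (not the authors' implementation) is:

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """1-D causal convolution with dilation: output at time t depends only
    on x[t], x[t-d], x[t-2d], ... (left zero-padding prevents future leak)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

x = np.arange(6, dtype=float)                  # toy feature sequence
y = causal_dilated_conv1d(x, w=[1.0, 1.0], dilation=2)
print(y)  # each output is x[t] + x[t-2] (with x[t-2] = 0 for t < 2)
```

Stacking such layers with exponentially growing dilations (1, 2, 4, ...) is what lets a TCN cover long speech contexts with few parameters.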

  • Open Access

    ARTICLE

    Facial Emotion Recognition Using Swarm Optimized Multi-Dimensional DeepNets with Losses Calculated by Cross Entropy Function

    A. N. Arun1,*, P. Maheswaravenkatesh2, T. Jayasankar2

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3285-3301, 2023, DOI:10.32604/csse.2023.035356 - 03 April 2023

    Abstract The human face forms a canvas wherein various non-verbal expressions are communicated. These expressional cues and verbal communication represent the accurate perception of the actual intent. In many cases, a person may present an outward expression that might differ from the genuine emotion or the feeling that the person experiences. Even when people try to hide these emotions, the real emotions that are internally felt might reflect as facial expressions in the form of micro expressions. These micro expressions cannot be masked and reflect the actual emotional state of a person under study. Such micro…

  • Open Access

    ARTICLE

    Parameter Tuned Machine Learning Based Emotion Recognition on Arabic Twitter Data

    Ibrahim M. Alwayle1, Badriyya B. Al-onazi2, Jaber S. Alzahrani3, Khaled M. Alalayah1, Khadija M. Alaidarous1, Ibrahim Abdulrab Ahmed4, Mahmoud Othman5, Abdelwahed Motwakel6,*

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3423-3438, 2023, DOI:10.32604/csse.2023.033834 - 03 April 2023

    Abstract Arabic is one of the most spoken languages across the globe. However, there are fewer studies concerning Sentiment Analysis (SA) in Arabic. In recent years, the detected sentiments and emotions expressed in tweets have received significant interest. The substantial role played by the Arab region in international politics and the global economy has urged the need to examine the sentiments and emotions in the Arabic language. Two common models are available: Machine Learning and lexicon-based approaches to address emotion classification problems. With this motivation, the current research article develops a Teaching and Learning Optimization with…

  • Open Access

    ARTICLE

    A Multi-Modal Deep Learning Approach for Emotion Recognition

    H. M. Shahzad1,3, Sohail Masood Bhatti1,3,*, Arfan Jaffar1,3, Muhammad Rashid2

    Intelligent Automation & Soft Computing, Vol.36, No.2, pp. 1561-1570, 2023, DOI:10.32604/iasc.2023.032525 - 05 January 2023

    Abstract In recent years, research on facial expression recognition (FER) under masks has been trending. Wearing a mask for protection from COVID-19 has become compulsory, and because it hides facial expressions, FER under a mask is a difficult task. The prevailing unimodal techniques for facial recognition fall short of good results for masked faces; however, a multimodal technique can be employed to generate better results. We propose a multimodal methodology based on deep learning for facial recognition under a masked face using facial and vocal…

  • Open Access

    ARTICLE

    A Multi-Level Circulant Cross-Modal Transformer for Multimodal Speech Emotion Recognition

    Peizhu Gong1, Jin Liu1, Zhongdai Wu2, Bing Han2, Y. Ken Wang3, Huihua He4,*

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 4203-4220, 2023, DOI:10.32604/cmc.2023.028291 - 31 October 2022

    Abstract Speech emotion recognition, as an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, due to its inclusion of the semantic features of two different modalities, i.e., audio and text. However, existing methods often fail to effectively represent features and capture correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model can be divided into three steps: feature extraction, interaction, and fusion. Self-supervised embedding models are introduced for feature extraction, which give a…

  • Open Access

    ARTICLE

    Performance Analysis of a Chunk-Based Speech Emotion Recognition Model Using RNN

    Hyun-Sam Shin1, Jun-Ki Hong2,*

    Intelligent Automation & Soft Computing, Vol.36, No.1, pp. 235-248, 2023, DOI:10.32604/iasc.2023.033082 - 29 September 2022

    Abstract Recently, artificial-intelligence-based automatic customer response systems have been widely used instead of customer service representatives. It is therefore important for automatic customer service to promptly recognize emotions in a customer’s voice and provide the appropriate service accordingly. We analyzed the emotion recognition (ER) accuracy as a function of simulation time using the proposed chunk-based speech ER (CSER) model. The proposed CSER model divides voice signals into 3-s-long chunks to efficiently recognize the emotions inherent in the customer’s voice. We evaluated the ER performance of voice signal chunks…
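The chunking step this abstract describes, splitting a voice signal into fixed 3-second segments, can be sketched as below. This is a minimal illustration assuming a 1-D sample array and a known sample rate; how the paper handles the trailing partial chunk is not stated, so dropping it here is our assumption:

```python
import numpy as np

def split_into_chunks(signal, sample_rate, chunk_seconds=3, drop_last=True):
    """Split a 1-D voice signal into fixed-length chunks (3 s by default).
    drop_last=True discards a trailing partial chunk (an assumption)."""
    n = sample_rate * chunk_seconds
    full = len(signal) // n
    chunks = [signal[i * n:(i + 1) * n] for i in range(full)]
    if not drop_last and len(signal) % n:
        chunks.append(signal[full * n:])
    return chunks

sr = 4                               # tiny illustrative sample rate
sig = np.arange(sr * 7)              # 7 s of "audio" samples
chunks = split_into_chunks(sig, sr)
print(len(chunks), len(chunks[0]))   # 2 full 3-s chunks of 12 samples each
```

Each chunk would then be fed to the RNN-based recognizer independently, which is what allows emotion estimates to be updated while the customer is still speaking.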

  • Open Access

    ARTICLE

    The Efficacy of Deep Learning-Based Mixed Model for Speech Emotion Recognition

    Mohammad Amaz Uddin1, Mohammad Salah Uddin Chowdury1, Mayeen Uddin Khandaker2,*, Nissren Tamam3, Abdelmoneim Sulieman4

    CMC-Computers, Materials & Continua, Vol.74, No.1, pp. 1709-1722, 2023, DOI:10.32604/cmc.2023.031177 - 22 September 2022

    Abstract Human speech indirectly represents the mental state or emotion of others. The use of Artificial Intelligence (AI)-based techniques may bring revolution in this modern era by recognizing emotion from speech. In this study, we introduced a robust method for emotion recognition from human speech using a well-performed preprocessing technique together with the deep learning-based mixed model consisting of Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN). About 2800 audio files were extracted from the Toronto emotional speech set (TESS) database for this study. A high-pass and Savitzky-Golay filter have been used to…
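The abstract mentions a high-pass filtering stage before the LSTM-CNN model. As a rough stand-in for that step only (the paper's exact filter design and the Savitzky-Golay smoothing stage are not reproduced here), a standard pre-emphasis high-pass filter common in speech preprocessing can be sketched as:

```python
import numpy as np

def pre_emphasis(signal, alpha=0.97):
    """First-difference high-pass ("pre-emphasis") filter often applied
    before speech feature extraction: y[t] = x[t] - alpha * x[t-1].
    This is a generic stand-in, not the paper's specific filter."""
    x = np.asarray(signal, dtype=float)
    return np.append(x[0], x[1:] - alpha * x[:-1])

x = np.array([1.0, 1.0, 1.0, 1.0])   # a DC (constant) signal
y = pre_emphasis(x)
print(y)  # the constant component is strongly attenuated after sample 0
```

Attenuating low-frequency energy this way boosts the higher-frequency content that carries most of the spectral detail used by downstream LSTM/CNN feature extractors.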

  • Open Access

    ARTICLE

    Multilayer Neural Network Based Speech Emotion Recognition for Smart Assistance

    Sandeep Kumar1, MohdAnul Haq2, Arpit Jain3, C. Andy Jason4, Nageswara Rao Moparthi1, Nitin Mittal5, Zamil S. Alzamil2,*

    CMC-Computers, Materials & Continua, Vol.74, No.1, pp. 1523-1540, 2023, DOI:10.32604/cmc.2023.028631 - 22 September 2022

    Abstract Day by day, biometric-based systems play a vital role in our daily lives. This paper proposes an intelligent assistant that identifies emotions via voice messages. A biometric system has been developed to detect human emotions based on voice recognition and to control a few electronic peripherals for alert actions. The proposed smart assistant aims to support people through buzzer and light-emitting diode (LED) alert signals, and it also keeps track of places such as households, hospitals, and remote areas. The proposed approach is able to detect seven emotions: worry,…

Displaying results 11-20 of 32 (page 2).