Search Results (64)
  • Open Access

    ARTICLE

    The Effect of Sleep and Cognition Enhancement Multimodal Intervention for Mild Cognitive Impairment with Sleep Disturbance in the Community-Dwelling Elderly

    Eun Kyoung Han, Hae Kyoung Son*

    International Journal of Mental Health Promotion, Vol.25, No.11, pp. 1197-1208, 2023, DOI:10.32604/ijmhp.2023.041560

    Abstract Dementia prevalence has soared due to population aging. In Mild Cognitive Impairment (MCI), a pre-dementia stage, sleep disturbances have raised much interest as a factor in a bidirectional relationship with cognitive decline. Thus, this study developed the Sleep and Cognition Enhancement Multimodal Intervention (SCEMI) based on Lazarus’ multimodal approach and conducted a randomized controlled experiment to investigate the effects of the novel program on sleep and cognition in elderly people with MCI. The participants were 55 elderly adults with MCI and sleep disturbances at two dementia care centers located in S-city, Gyeonggi-do, South Korea (n = 25 in the experimental group and n… More >

  • Open Access

    ARTICLE

    Clinical Knowledge-Based Hybrid Swin Transformer for Brain Tumor Segmentation

    Xiaoliang Lei1, Xiaosheng Yu2,*, Hao Wu3, Chengdong Wu2,*, Jingsi Zhang2

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3797-3811, 2023, DOI:10.32604/cmc.2023.042069

    Abstract Accurate tumor segmentation from brain tissues in Magnetic Resonance Imaging (MRI) is crucial in the pre-surgical planning of brain tumor malignancy. The heterogeneous intensity and fuzzy boundaries of MRI images make brain tumor segmentation challenging. Furthermore, recent studies have yet to fully employ the considerable and supplementary information of MRI sequences, which offers critical a priori knowledge. This paper proposes a clinical knowledge-based hybrid Swin Transformer multimodal brain tumor segmentation algorithm based on how experts identify malignancies from MRI images. During the encoder phase, a dual backbone network with a Swin Transformer backbone to capture long dependencies from 3D MR images and a… More >

  • Open Access

    ARTICLE

    Text Augmentation-Based Model for Emotion Recognition Using Transformers

    Fida Mohammad1,*, Mukhtaj Khan1, Safdar Nawaz Khan Marwat2, Naveed Jan3, Neelam Gohar4, Muhammad Bilal3, Amal Al-Rasheed5

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3523-3547, 2023, DOI:10.32604/cmc.2023.040202

    Abstract Emotion Recognition in Conversations (ERC) is fundamental in creating emotionally intelligent machines. Graph-Based Network (GBN) models have gained popularity in detecting conversational contexts for ERC tasks. However, their limited ability to collect and acquire contextual information hinders their effectiveness. We propose a Text Augmentation-based computational model for recognizing emotions using transformers (TA-MERT) to address this. The proposed model uses the Multimodal Emotion Lines Dataset (MELD), which ensures a balanced representation for recognizing human emotions. The model used text augmentation techniques to produce more training data, improving the proposed model’s accuracy. Transformer encoders train the deep neural network (DNN) model, especially… More >
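    The TA-MERT abstract above attributes its accuracy gains to text augmentation producing additional training data. As a rough, hypothetical sketch of what word-level augmentation can look like (the random-swap and random-deletion operations and the example sentence below are illustrative, not the paper's actual augmentation pipeline):

    ```python
    import random

    def augment_text(sentence: str, seed: int = 0) -> list[str]:
        """Generate simple augmented variants of a sentence via
        random word swap and random word deletion."""
        rng = random.Random(seed)
        words = sentence.split()
        variants = []

        # Random swap: exchange the words at two distinct positions.
        if len(words) >= 2:
            swapped = words[:]
            i, j = rng.sample(range(len(swapped)), 2)
            swapped[i], swapped[j] = swapped[j], swapped[i]
            variants.append(" ".join(swapped))

        # Random deletion: drop one word at a random position.
        if len(words) >= 2:
            dropped = words[:]
            del dropped[rng.randrange(len(dropped))]
            variants.append(" ".join(dropped))

        return variants

    # Each original utterance yields extra training examples:
    extra = augment_text("i am so happy today", seed=1)
    ```

    In practice such variants are added to the original corpus before training, which is the general mechanism the abstract describes for enlarging the MELD training set.
    
    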

  • Open Access

    ARTICLE

    Mixed Integer Robust Programming Model for Multimodal Fresh Agricultural Products Terminal Distribution Network Design

    Feng Yang1, Zhong Wu2,*, Xiaoyan Teng1

    CMES-Computer Modeling in Engineering & Sciences, Vol.138, No.1, pp. 719-738, 2024, DOI:10.32604/cmes.2023.028699

    Abstract The low efficiency and high cost of fresh agricultural product terminal distribution directly restrict the operation of the entire supply network. To reduce costs and optimize the distribution network, we construct a mixed integer programming model that minimizes fixed, transportation, fresh-keeping, time, carbon emission, and performance incentive costs. We analyze the performance of traditional rider distribution and robot distribution modes in detail. In addition, the uncertainty of actual market demand poses a serious threat to the stability of the terminal distribution network. To withstand uncertain disturbances, we further extend the model to a robust… More > Graphic Abstract

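    The cost structure sketched in the abstract (a sum of fixed and per-unit cost terms minimized over rider and robot distribution modes) can be illustrated with a toy brute-force mode assignment. All numbers and names below are invented for illustration; the paper solves a far richer mixed integer robust program, not this enumeration:

    ```python
    from itertools import product

    # Hypothetical per-mode costs (illustrative, not from the paper):
    FIXED = {"rider": 10.0, "robot": 40.0}   # one-off cost if a mode is used at all
    UNIT = {"rider": 3.0, "robot": 1.0}      # transport cost per unit of demand
    demand = [5, 12, 7]                      # demand of three delivery zones

    def total_cost(assignment):
        """Fixed cost for each mode actually used, plus per-zone transport cost."""
        used = set(assignment)
        fixed = sum(FIXED[m] for m in used)
        transport = sum(UNIT[m] * d for m, d in zip(assignment, demand))
        return fixed + transport

    # Enumerate every zone-to-mode assignment and keep the cheapest.
    best = min(product(("rider", "robot"), repeat=len(demand)), key=total_cost)
    ```

    With these toy numbers the all-robot assignment wins (40 fixed + 24 transport = 64), showing how a high fixed cost can still be justified by a lower per-unit cost; the real model adds fresh-keeping, time, carbon emission, and incentive terms plus demand uncertainty.
    
    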

  • Open Access

    ARTICLE

    Multi-Model Fusion Framework Using Deep Learning for Visual-Textual Sentiment Classification

    Israa K. Salman Al-Tameemi1,3, Mohammad-Reza Feizi-Derakhshi1,*, Saeed Pashazadeh2, Mohammad Asadpour2

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 2145-2177, 2023, DOI:10.32604/cmc.2023.040997

    Abstract Multimodal Sentiment Analysis (SA) is gaining popularity due to its broad application potential. The existing studies have focused on the SA of single modalities, such as texts or photos, posing challenges in effectively handling social media data with multiple modalities. Moreover, most multimodal research has concentrated on merely combining the two modalities rather than exploring their complex correlations, leading to unsatisfactory sentiment classification results. Motivated by this, we propose a new visual-textual sentiment classification model named Multi-Model Fusion (MMF), which uses a mixed fusion framework for SA to effectively capture the essential information and the intrinsic relationship between the visual… More >

  • Open Access

    ARTICLE

    A Method of Multimodal Emotion Recognition in Video Learning Based on Knowledge Enhancement

    Hanmin Ye1,2, Yinghui Zhou1, Xiaomei Tao3,*

    Computer Systems Science and Engineering, Vol.47, No.2, pp. 1709-1732, 2023, DOI:10.32604/csse.2023.039186

    Abstract With the popularity of online learning and the significant influence of emotion on learning outcomes, more and more research focuses on emotion recognition in online learning. Most current research uses the comments on the learning platform or the learner’s facial expression for emotion recognition; research data on other modalities are scarce. Most studies also ignore the impact of instructional videos on learners and the guidance of knowledge on data. Given the need for research data from other modalities, we construct a synchronous multimodal data set for analyzing learners’ emotional states in online learning… More >

  • Open Access

    ARTICLE

    Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis

    Jieyu An1,*, Wan Mohd Nazmee Wan Zainon1, Binfen Ding2

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 1673-1689, 2023, DOI:10.32604/iasc.2023.039763

    Abstract Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modes, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation is attributed to their training on unimodal data, and necessitates the use of complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning… More >

  • Open Access

    ARTICLE

    Multimodal Sentiment Analysis Using BiGRU and Attention-Based Hybrid Fusion Strategy

    Zhizhong Liu*, Bin Zhou, Lingqiang Meng, Guangyu Huang

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 1963-1981, 2023, DOI:10.32604/iasc.2023.038835

    Abstract Recently, multimodal sentiment analysis has increasingly attracted attention with the popularity of complementary data streams, which has great potential to surpass unimodal sentiment analysis. One challenge of multimodal sentiment analysis is how to design an efficient multimodal feature fusion strategy. Unfortunately, existing work always considers feature-level fusion or decision-level fusion, and few research works focus on hybrid fusion strategies that contain feature-level fusion and decision-level fusion. To improve the performance of multimodal sentiment analysis, we present a novel multimodal sentiment analysis model using BiGRU and attention-based hybrid fusion strategy (BAHFS). Firstly, we apply BiGRU to learn the unimodal features of… More >
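    The BAHFS abstract above distinguishes feature-level fusion, decision-level fusion, and the hybrid strategy combining both. A minimal stand-in sketch of that distinction (the two-modality setup, the mean-based stand-in classifier, and the blending weight below are illustrative, not the paper's BiGRU-and-attention architecture):

    ```python
    def feature_level_fusion(text_feat, audio_feat):
        # Feature-level fusion: concatenate unimodal feature vectors
        # into one joint representation before classification.
        return text_feat + audio_feat

    def decision_level_fusion(scores):
        # Decision-level fusion: combine independent per-modality
        # predictions, here by simple averaging.
        return sum(scores) / len(scores)

    def hybrid_fusion(text_feat, audio_feat, text_score, audio_score, alpha=0.5):
        # Hybrid strategy: blend a score computed from the fused features
        # (a mean over the joint vector stands in for a classifier here)
        # with the decision-level average of per-modality scores.
        joint = feature_level_fusion(text_feat, audio_feat)
        feature_score = sum(joint) / len(joint)
        decision_score = decision_level_fusion([text_score, audio_score])
        return alpha * feature_score + (1 - alpha) * decision_score

    sentiment = hybrid_fusion([0.2, 0.4], [0.6, 0.8], 0.5, 0.7)
    ```

    The point of the hybrid design is that the two fusion levels fail differently, so blending them can outperform either alone, which is the gap in prior work the abstract highlights.
    
    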

  • Open Access

    ARTICLE

    Predictive Multimodal Deep Learning-Based Sustainable Renewable and Non-Renewable Energy Utilization

    Abdelwahed Motwakel1,*, Marwa Obayya2, Nadhem Nemri3, Khaled Tarmissi4, Heba Mohsen5, Mohammed Rizwanulla6, Ishfaq Yaseen6, Abu Sarwar Zamani6

    Computer Systems Science and Engineering, Vol.47, No.1, pp. 1267-1281, 2023, DOI:10.32604/csse.2023.037735

    Abstract Recently, renewable energy (RE) has become popular due to its benefits, such as being inexpensive, low-carbon, ecologically friendly, steady, and reliable. RE sources are gradually combined with non-renewable energy (NRE) sources in electric grids to satisfy energy demands. Since energy utilization is closely tied to national energy policy, artificial intelligence (AI) and deep learning (DL) based models can be employed for energy prediction on RE and NRE power resources, and predicting the energy consumption of RE and NRE sources with effective models becomes necessary. With this motivation, this study presents a new multimodal fusion-based predictive tool for energy… More >

  • Open Access

    ARTICLE

    Leveraging Multimodal Ensemble Fusion-Based Deep Learning for COVID-19 on Chest Radiographs

    Mohamed Yacin Sikkandar1,*, K. Hemalatha2, M. Subashree3, S. Srinivasan4, Seifedine Kadry5,6,7, Jungeun Kim8, Keejun Han9

    Computer Systems Science and Engineering, Vol.47, No.1, pp. 873-889, 2023, DOI:10.32604/csse.2023.035730

    Abstract Recently, COVID-19 has posed a challenging threat to researchers, scientists, healthcare professionals, and administrations across the globe, from its diagnosis to its treatment. Researchers are making persistent efforts to derive probable solutions for managing the pandemic in their areas. One of the most widespread and effective ways to detect COVID-19 is to utilize radiological images comprising X-rays and computed tomography (CT) scans. At the same time, recent advances in machine learning (ML) and deep learning (DL) models show promising results in medical imaging. In particular, the convolutional neural network (CNN) model can be applied to identify abnormalities on chest radiographs.… More >

Displaying results 11-20 (page 2 of 64).