Search Results (61)
  • Open Access

    ARTICLE

    Mixed Integer Robust Programming Model for Multimodal Fresh Agricultural Products Terminal Distribution Network Design

    Feng Yang1, Zhong Wu2,*, Xiaoyan Teng1

    CMES-Computer Modeling in Engineering & Sciences, Vol.138, No.1, pp. 719-738, 2024, DOI:10.32604/cmes.2023.028699

    Abstract The low efficiency and high cost of fresh agricultural product terminal distribution directly restrict the operation of the entire supply network. To reduce costs and optimize the distribution network, we construct a mixed integer programming model that jointly minimizes fixed, transportation, fresh-keeping, time, carbon emission, and performance incentive costs. We analyze the performance of the traditional rider distribution and robot distribution modes in detail. In addition, the uncertainty of actual market demand poses a serious threat to the stability of the terminal distribution network. To resist uncertain interference, we further extend the model to a robust…
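
    The listing does not reproduce the paper's formulation. As a rough illustration of the kind of model the abstract describes, the sketch below builds a small facility-location-style mixed integer program in Python with PuLP; the sets, cost coefficients, and the simple worst-case demand margin standing in for the robust counterpart are all made up for illustration, not taken from the paper.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

# Hypothetical sets and cost data (illustrative only, not from the paper)
sites = ["s1", "s2"]                     # candidate terminal sites
customers = ["c1", "c2", "c3"]
fixed_cost = {"s1": 500, "s2": 650}       # site opening cost
unit_cost = {(i, j): 4.0 for i in sites for j in customers}   # per-unit delivery cost
demand = {"c1": 30, "c2": 45, "c3": 25}
demand_dev = {j: 0.2 * d for j, d in demand.items()}          # worst-case demand deviation
capacity = {"s1": 80, "s2": 120}

prob = LpProblem("terminal_distribution", LpMinimize)
open_site = LpVariable.dicts("open", sites, cat=LpBinary)
ship = LpVariable.dicts("ship", [(i, j) for i in sites for j in customers], lowBound=0)

# Objective: fixed opening cost plus variable delivery cost
# (the paper's fresh-keeping, time, emission, and incentive terms are omitted here)
prob += lpSum(fixed_cost[i] * open_site[i] for i in sites) + \
        lpSum(unit_cost[i, j] * ship[i, j] for i in sites for j in customers)

# Robust-style demand satisfaction: cover nominal demand plus its worst-case deviation
for j in customers:
    prob += lpSum(ship[i, j] for i in sites) >= demand[j] + demand_dev[j]

# Capacity is available only at opened sites
for i in sites:
    prob += lpSum(ship[i, j] for j in customers) <= capacity[i] * open_site[i]

prob.solve(PULP_CBC_CMD(msg=False))
print({i: open_site[i].value() for i in sites})
```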

  • Open Access

    ARTICLE

    Multi-Model Fusion Framework Using Deep Learning for Visual-Textual Sentiment Classification

    Israa K. Salman Al-Tameemi1,3, Mohammad-Reza Feizi-Derakhshi1,*, Saeed Pashazadeh2, Mohammad Asadpour2

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 2145-2177, 2023, DOI:10.32604/cmc.2023.040997

    Abstract Multimodal Sentiment Analysis (SA) is gaining popularity due to its broad application potential. The existing studies have focused on the SA of single modalities, such as texts or photos, posing challenges in effectively handling social media data with multiple modalities. Moreover, most multimodal research has concentrated on merely combining the two modalities rather than exploring their complex correlations, leading to unsatisfactory sentiment classification results. Motivated by this, we propose a new visual-textual sentiment classification model named Multi-Model Fusion (MMF), which uses a mixed fusion framework for SA to effectively capture the essential information and the intrinsic relationship between the visual…

  • Open Access

    ARTICLE

    A Method of Multimodal Emotion Recognition in Video Learning Based on Knowledge Enhancement

    Hanmin Ye1,2, Yinghui Zhou1, Xiaomei Tao3,*

    Computer Systems Science and Engineering, Vol.47, No.2, pp. 1709-1732, 2023, DOI:10.32604/csse.2023.039186

    Abstract With the popularity of online learning and the significant influence of emotion on learning outcomes, more and more research focuses on emotion recognition in online learning. Most current studies use the comments of the learning platform or the learner’s expression for emotion recognition, and research data on other modalities are scarce. Most studies also ignore the impact of instructional videos on learners and the guidance of knowledge on data. To address the need for data in other modalities, we construct a synchronous multimodal data set for analyzing learners’ emotional states in online learning…

  • Open Access

    ARTICLE

    Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis

    Jieyu An1,*, Wan Mohd Nazmee Wan Zainon1, Binfen Ding2

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 1673-1689, 2023, DOI:10.32604/iasc.2023.039763

    Abstract Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modes, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation is attributed to their training on unimodal data, and necessitates the use of complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning…
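
    As a generic illustration of multimodal contrastive learning (not the authors' exact objective), the sketch below computes a symmetric CLIP-style InfoNCE loss between paired image and text embeddings; the batch size, embedding dimension, and temperature are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor, temperature: float = 0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) outputs of the two encoders.
    Matching pairs share a row index; every other row in the batch acts as a negative.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature        # (batch, batch) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)            # image -> matching text
    loss_t2i = F.cross_entropy(logits.t(), targets)        # text -> matching image
    return 0.5 * (loss_i2t + loss_t2i)

# Toy usage with random tensors standing in for encoder outputs
loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```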

  • Open Access

    ARTICLE

    Multimodal Sentiment Analysis Using BiGRU and Attention-Based Hybrid Fusion Strategy

    Zhizhong Liu*, Bin Zhou, Lingqiang Meng, Guangyu Huang

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 1963-1981, 2023, DOI:10.32604/iasc.2023.038835

    Abstract Recently, multimodal sentiment analysis has attracted increasing attention with the popularity of complementary data streams, and it has great potential to surpass unimodal sentiment analysis. One challenge of multimodal sentiment analysis is how to design an efficient multimodal feature fusion strategy. Unfortunately, existing work typically considers either feature-level fusion or decision-level fusion, and few studies focus on hybrid strategies that combine the two. To improve the performance of multimodal sentiment analysis, we present a novel multimodal sentiment analysis model using BiGRU and an attention-based hybrid fusion strategy (BAHFS). Firstly, we apply BiGRU to learn the unimodal features of…
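
    A minimal sketch of a BiGRU-plus-attention model that mixes feature-level and decision-level fusion is shown below; the two modalities, layer sizes, and the equal weighting of the decision branches are assumptions for illustration, not the BAHFS configuration.

```python
import torch
import torch.nn as nn

class HybridFusionSketch(nn.Module):
    """Toy two-modality classifier combining feature-level and decision-level fusion."""

    def __init__(self, dim_a=74, dim_b=300, hidden=64, num_classes=3):
        super().__init__()
        self.gru_a = nn.GRU(dim_a, hidden, batch_first=True, bidirectional=True)
        self.gru_b = nn.GRU(dim_b, hidden, batch_first=True, bidirectional=True)
        self.attn_a = nn.Linear(2 * hidden, 1)
        self.attn_b = nn.Linear(2 * hidden, 1)
        self.head_a = nn.Linear(2 * hidden, num_classes)       # unimodal decisions
        self.head_b = nn.Linear(2 * hidden, num_classes)
        self.head_fused = nn.Linear(4 * hidden, num_classes)   # feature-level fusion

    def _encode(self, x, gru, attn):
        h, _ = gru(x)                             # (batch, seq, 2*hidden)
        w = torch.softmax(attn(h), dim=1)         # attention weights over time steps
        return (w * h).sum(dim=1)                 # attended sequence summary

    def forward(self, x_a, x_b):
        f_a = self._encode(x_a, self.gru_a, self.attn_a)
        f_b = self._encode(x_b, self.gru_b, self.attn_b)
        fused_logits = self.head_fused(torch.cat([f_a, f_b], dim=-1))
        # Decision-level fusion: average the unimodal and fused predictions
        return (self.head_a(f_a) + self.head_b(f_b) + fused_logits) / 3

model = HybridFusionSketch()
logits = model(torch.randn(4, 20, 74), torch.randn(4, 20, 300))   # -> (4, 3)
```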

  • Open Access

    ARTICLE

    Predictive Multimodal Deep Learning-Based Sustainable Renewable and Non-Renewable Energy Utilization

    Abdelwahed Motwakel1,*, Marwa Obayya2, Nadhem Nemri3, Khaled Tarmissi4, Heba Mohsen5, Mohammed Rizwanulla6, Ishfaq Yaseen6, Abu Sarwar Zamani6

    Computer Systems Science and Engineering, Vol.47, No.1, pp. 1267-1281, 2023, DOI:10.32604/csse.2023.037735

    Abstract Recently, renewable energy (RE) has become popular due to its benefits, such as being inexpensive, low-carbon, ecologically friendly, steady, and reliable. RE sources are gradually combined with non-renewable energy (NRE) sources into electric grids to satisfy energy demands. Since energy utilization is closely tied to national energy policy, artificial intelligence (AI) and deep learning (DL) based models can be employed for energy prediction on RE and NRE power resources. Predicting the energy consumption of RE and NRE sources with effective models therefore becomes necessary. With this motivation, this study presents a new multimodal fusion-based predictive tool for energy…

  • Open Access

    ARTICLE

    Leveraging Multimodal Ensemble Fusion-Based Deep Learning for COVID-19 on Chest Radiographs

    Mohamed Yacin Sikkandar1,*, K. Hemalatha2, M. Subashree3, S. Srinivasan4, Seifedine Kadry5,6,7, Jungeun Kim8, Keejun Han9

    Computer Systems Science and Engineering, Vol.47, No.1, pp. 873-889, 2023, DOI:10.32604/csse.2023.035730

    Abstract Recently, COVID-19 has posed a challenging threat to researchers, scientists, healthcare professionals, and administrations across the globe, from its diagnosis to its treatment. Researchers are making persistent efforts to derive possible solutions for managing the pandemic in their areas. One of the widespread and effective ways to detect COVID-19 is to utilize radiological images comprising X-rays and computed tomography (CT) scans. At the same time, recent advances in machine learning (ML) and deep learning (DL) models show promising results in medical imaging. In particular, the convolutional neural network (CNN) model can be applied to identifying abnormalities on chest radiographs.…
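
    As one plausible building block (not the ensemble fusion described in the abstract), the sketch below fine-tunes a single torchvision ResNet-18 for binary chest-radiograph classification; the backbone choice, preprocessing, and hyper-parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# A single ResNet-18 fine-tuned for COVID / non-COVID chest X-ray classification;
# an ensemble approach would train and combine several such backbones.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)     # replace the ImageNet head

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # radiographs are single-channel
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of preprocessed radiographs and labels."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```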

  • Open Access

    ARTICLE

    Improving Targeted Multimodal Sentiment Classification with Semantic Description of Images

    Jieyu An*, Wan Mohd Nazmee Wan Zainon, Zhang Hao

    CMC-Computers, Materials & Continua, Vol.75, No.3, pp. 5801-5815, 2023, DOI:10.32604/cmc.2023.038220

    Abstract Targeted multimodal sentiment classification (TMSC) aims to identify the sentiment polarity of a target mentioned in a multimodal post. The majority of current studies on this task focus on mapping the image and the text to a high-dimensional space in order to obtain and fuse implicit representations. They ignore the rich semantic information contained in the images and do not account for the contribution of the visual modality to the multimodal fusion representation, both of which can influence the results of TMSC tasks. This paper proposes a general model for Improving Targeted Multimodal Sentiment Classification with Semantic Description of Images (ITMSC) as…

  • Open Access

    ARTICLE

    MFF-Net: Multimodal Feature Fusion Network for 3D Object Detection

    Peicheng Shi1,*, Zhiqiang Liu1, Heng Qi1, Aixi Yang2

    CMC-Computers, Materials & Continua, Vol.75, No.3, pp. 5615-5637, 2023, DOI:10.32604/cmc.2023.037794

    Abstract In complex traffic environments, it is very important for autonomous vehicles to accurately perceive the dynamic information of surrounding vehicles in advance. The accuracy of 3D object detection is affected by problems such as illumination changes, object occlusion, and object detection distance. To address these challenges, we propose a multimodal feature fusion network for 3D object detection (MFF-Net). In this research, we first use a spatial transformation projection algorithm to map the image features into the feature space, so that the image features are in the same spatial dimension when fused…
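
    A common way to bring image and point-cloud features into a shared space is to project 3D points into the image plane and sample image features at the projected pixels. The sketch below shows a generic pinhole projection of LiDAR points; it is not the paper's spatial transformation projection algorithm, and the intrinsic and extrinsic matrices are assumed inputs.

```python
import torch

def project_points_to_image(points_xyz: torch.Tensor, K: torch.Tensor, T_cam_from_lidar: torch.Tensor):
    """Project LiDAR points into the camera image plane (pinhole model).

    points_xyz: (N, 3) points in the LiDAR frame.
    K: (3, 3) camera intrinsic matrix.
    T_cam_from_lidar: (4, 4) extrinsic transform from the LiDAR to the camera frame.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    ones = torch.ones(points_xyz.shape[0], 1)
    pts_cam = (T_cam_from_lidar @ torch.cat([points_xyz, ones], dim=1).t()).t()[:, :3]
    in_front = pts_cam[:, 2] > 0
    uvw = (K @ pts_cam.t()).t()
    uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)
    return uv, in_front

# With pixel coordinates in hand, image features can be sampled at the projected
# locations (e.g. by bilinear interpolation) and concatenated with point-wise
# LiDAR features before the detection head.
```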

  • Open Access

    ARTICLE

    Fake News Detection Based on Multimodal Inputs

    Zhiping Liang*

    CMC-Computers, Materials & Continua, Vol.75, No.2, pp. 4519-4534, 2023, DOI:10.32604/cmc.2023.037035

    Abstract In view of its various adverse effects, fake news detection has become an extremely important task. Many detection methods have been proposed so far, but they still have limitations. For example, some merely concatenate two independently encoded unimodal representations instead of integrating them, so the complementary and correlated information in the news content is not fully captured. This simple fusion approach may omit information and introduce interference into the model. To solve these problems, this paper proposes the Fake News Detection model based on BLIP (FNDB). First,…
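
    As a hedged illustration of how BLIP can supply visual evidence for text-image consistency checks (one plausible component, not the FNDB architecture itself), the sketch below captions a news image with the Hugging Face BLIP captioning checkpoint.

```python
from PIL import Image
import requests
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load a pretrained BLIP captioning checkpoint from the Hugging Face hub.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def describe_image(url: str) -> str:
    """Return a BLIP-generated caption for the image attached to a news post."""
    image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(out[0], skip_special_tokens=True)

# The caption can then be encoded alongside the news text (e.g. with a text
# encoder) so that visual and textual evidence are compared in a shared space.
```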

Displaying results 11-20 of 61 (page 2).