Search Results (61)
  • Open Access

    ARTICLE

    Pre-Impact and Impact Fall Detection Based on a Multimodal Sensor Using a Deep Residual Network

    Narit Hnoohom1, Sakorn Mekruksavanich2, Anuchit Jitpattanakul3,4,*

    Intelligent Automation & Soft Computing, Vol.36, No.3, pp. 3371-3385, 2023, DOI:10.32604/iasc.2023.036551

    Abstract Falls are a leading contributing factor to both fatal and nonfatal injuries in the elderly, so pre-impact fall detection, which identifies a fall before the body collides with the floor, is essential. Recently, researchers have turned their attention from post-impact to pre-impact fall detection. Pre-impact fall detection solutions typically use either a threshold-based or a machine learning-based approach, although the threshold value is difficult to determine accurately in threshold-based methods. Moreover, while additional features can sometimes help categorize falls and non-falls more precisely, determining the significant features is too time-intensive, thus using a…
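
    The abstract contrasts threshold-based and machine learning-based pre-impact detection. Below is a minimal, illustrative Python sketch of the threshold-based idea on tri-axial accelerometer data; the free-fall and impact thresholds, window length, and sampling rate are assumptions for illustration, not values from the paper.

      import numpy as np

      def detect_pre_impact(acc_xyz, fs=100.0, free_fall_g=0.6, impact_g=2.5):
          """Flag a possible pre-impact phase from tri-axial accelerometer samples.

          acc_xyz: (N, 3) array in units of g. A sustained drop of the acceleration
          magnitude below `free_fall_g` (near free fall) before any sample exceeds
          `impact_g` is treated as a pre-impact warning. Thresholds are illustrative.
          """
          mag = np.linalg.norm(acc_xyz, axis=1)        # signal vector magnitude in g
          below = mag < free_fall_g                    # near-free-fall samples
          win = int(0.1 * fs)                          # require ~100 ms of low magnitude
          for i in np.flatnonzero(below):
              if below[i:i + win].all() and not (mag[:i] > impact_g).any():
                  return i / fs                        # warning time in seconds
          return None

      # Example: 2 s of standing (magnitude ~1 g) followed by a drop toward free fall
      sim = np.vstack([np.tile([0.0, 0.0, 1.0], (200, 1)),
                       np.tile([0.0, 0.0, 0.2], (50, 1))])
      print(detect_pre_impact(sim))                    # prints 2.0 (seconds)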

  • Open Access

    ARTICLE

    Solving Geometry Problems via Feature Learning and Contrastive Learning of Multimodal Data

    Pengpeng Jian1, Fucheng Guo1,*, Yanli Wang2, Yang Li1

    CMES-Computer Modeling in Engineering & Sciences, Vol.136, No.2, pp. 1707-1728, 2023, DOI:10.32604/cmes.2023.023243

    Abstract This paper presents an end-to-end deep learning method to solve geometry problems via feature learning and contrastive learning of multimodal data. A key challenge in solving geometry problems with deep learning is adapting automatically to both single-modal and multimodal problems. Existing methods focus on either single-modal or multimodal problems and cannot handle both, yet a general geometry problem solver should be able to process problems of various modalities at the same time. In this paper, a shared feature-learning model of multimodal data is adopted to learn a unified feature representation of text and image, which…
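
    The contrastive learning mentioned here pairs text and diagram representations of the same problem. A minimal sketch of a generic symmetric InfoNCE-style contrastive objective is shown below, assuming batch-aligned embeddings; the dimensions, temperature, and loss form are illustrative and not the paper's exact formulation.

      import numpy as np

      def contrastive_loss(text_emb, img_emb, temperature=0.07):
          """Symmetric InfoNCE-style loss over a batch of matched text/image pairs.

          text_emb, img_emb: (B, D) arrays; row i of each encodes the same problem.
          A generic multimodal contrastive objective, not the paper's formulation.
          """
          t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
          v = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
          logits = t @ v.T / temperature                # (B, B) similarity matrix
          labels = np.arange(len(t))                    # matched pairs lie on the diagonal

          def cross_entropy(lg):
              lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
              log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
              return -log_probs[labels, labels].mean()

          # average the text-to-image and image-to-text directions
          return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

      rng = np.random.default_rng(0)
      print(contrastive_loss(rng.normal(size=(4, 16)), rng.normal(size=(4, 16))))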

  • Open Access

    ARTICLE

    Multimodal Fused Deep Learning Networks for Domain Specific Image Similarity Search

    Umer Waqas, Jesse Wiebe Visser, Hana Choe, Donghun Lee*

    CMC-Computers, Materials & Continua, Vol.75, No.1, pp. 243-258, 2023, DOI:10.32604/cmc.2023.035716

    Abstract The exponential increase in data over the past few years, particularly in images, has led to more complex content, since visual representation has become the new norm. E-commerce and similar platforms maintain large image catalogues of their products. Searching and retrieving similar images in image databases is still a challenge, even though several image retrieval techniques have been proposed over the past decade. Most of these techniques work well when querying general image databases, but they often fail in domain-specific image databases, especially for datasets with low intraclass variance. This paper proposes a domain-specific image similarity search engine based on a fused…
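
    A fused-embedding similarity search of this kind can be pictured as concatenating normalised embeddings from two encoders and ranking catalogue items by cosine similarity. The sketch below is a generic baseline under that assumption, not the paper's fused network.

      import numpy as np

      def fuse(image_emb, text_emb):
          """Fuse per-item embeddings from two encoders by L2-normalising and
          concatenating them. A generic fusion choice, not the paper's network."""
          a = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
          b = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
          return np.hstack([a, b])

      def search(query_fused, catalogue_fused, top_k=5):
          """Return indices of the top-k catalogue items by cosine similarity."""
          q = query_fused / np.linalg.norm(query_fused)
          c = catalogue_fused / np.linalg.norm(catalogue_fused, axis=1, keepdims=True)
          return np.argsort(-(c @ q))[:top_k]

      rng = np.random.default_rng(1)
      catalogue = fuse(rng.normal(size=(100, 64)), rng.normal(size=(100, 32)))
      query = fuse(rng.normal(size=(1, 64)), rng.normal(size=(1, 32)))[0]
      print(search(query, catalogue, top_k=3))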

  • Open Access

    ARTICLE

    Multimodal Spatiotemporal Feature Map for Dynamic Gesture Recognition

    Xiaorui Zhang1,2,3,*, Xianglong Zeng1, Wei Sun3,4, Yongjun Ren1,2,3, Tong Xu5

    Computer Systems Science and Engineering, Vol.46, No.1, pp. 671-686, 2023, DOI:10.32604/csse.2023.035119

    Abstract Gesture recognition technology enables machines to read human gestures and has significant application prospects in human-computer interaction and sign language translation. Existing research usually uses convolutional neural networks to extract features directly from raw gesture data, but the networks are affected by interference in the input data and thus fit unimportant features. In this paper, we propose a novel method for encoding spatio-temporal information that enhances the key features required for gesture recognition, such as the shape, structure, contour, position, and motion of the hand, thereby improving the accuracy of…
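
    One common way to encode spatio-temporal information is to pack a skeletal gesture sequence into a 2D joints-by-time map that a standard CNN can consume. The sketch below illustrates that general idea; the layout and normalisation are assumptions, not the encoding proposed in the paper.

      import numpy as np

      def spatiotemporal_map(joints_seq):
          """Pack a gesture sequence into a 2D feature map.

          joints_seq: (T, J, 3) array of J hand-joint coordinates over T frames.
          Rows are flattened joint coordinates, columns are time, values are
          min-max normalised to [0, 1]. One simple encoding, not the paper's.
          """
          T, J, C = joints_seq.shape
          fmap = joints_seq.reshape(T, J * C).T         # (J*3, T): space x time
          lo, hi = fmap.min(), fmap.max()
          return (fmap - lo) / (hi - lo + 1e-8)

      seq = np.random.default_rng(2).normal(size=(32, 21, 3))   # 32 frames, 21 joints
      print(spatiotemporal_map(seq).shape)                      # (63, 32)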

  • Open Access

    ARTICLE

    Novel Multimodal Biometric Feature Extraction for Precise Human Identification

    J. Vasavi1, M. S. Abirami2,*

    Intelligent Automation & Soft Computing, Vol.36, No.2, pp. 1349-1363, 2023, DOI:10.32604/iasc.2023.032604

    Abstract In recent years, biometric sensors have been applied to identify individuals and control access using identifiers such as fingerprints, palm prints, and iris patterns. However, precise identification remains challenging because a person's appearance and features vary over their lifetime. In response to these challenges, a novel Multimodal Biometric Feature Extraction (MBFE) model is proposed to extract features from noisy sensor data using a modified Ranking-based Deep Convolution Neural Network (RDCNN). The proposed MBFE model enables feature extraction from…
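
    As a point of reference for multimodal biometric feature extraction, the sketch below shows generic feature-level fusion of two modalities (normalise each feature vector, then concatenate). It is a baseline illustration only, not the modified RDCNN described in the paper.

      import numpy as np

      def zscore(x):
          """Z-score normalise a feature vector so modalities are comparable."""
          return (x - x.mean()) / (x.std() + 1e-8)

      def fuse_biometric_features(fingerprint_feat, iris_feat):
          """Feature-level fusion: normalise each modality's feature vector and
          concatenate them. A generic baseline, not the paper's RDCNN."""
          return np.concatenate([zscore(fingerprint_feat), zscore(iris_feat)])

      rng = np.random.default_rng(3)
      fused = fuse_biometric_features(rng.normal(size=128), rng.normal(size=64))
      print(fused.shape)   # (192,)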

  • Open Access

    ARTICLE

    The Neurosurgical Challenge of Primary Central Nervous System Lymphoma Diagnosis: A Multimodal Intraoperative Imaging Approach to Overcome Frameless Neuronavigated Biopsy Sampling Errors

    Roberto Altieri1,2,*, Francesco Certo1, Marco Garozzo1, Giacomo Cammarata1, Massimiliano Maione1, Giuseppa Fiumanò3, Giuseppe Broggi4, Giada Maria Vecchio4, Rosario Caltabiano4, Gaetano Magro4, Giuseppe Barbagallo1

    Oncologie, Vol.24, No.4, pp. 693-706, 2022, DOI:10.32604/oncologie.2022.025393

    Abstract Background: Intracranial lymphoma remains a challenging differential diagnosis in daily neurosurgical practice. We analyzed our early experience with a surgical series of frameless neuronavigated biopsies in Primary CNS Lymphomas (PCNSLs), highlighting the importance of using a combined intraoperative imaging protocol (5-ALA fluorescence, i-CT and 11C-MET-PET) to overcome potential targeting errors secondary to tumor volume reduction after corticosteroid therapy. Materials and Methods: All patients treated for PCNSLs at our center in a 24-month period (1/1/2019 to 31/12/2020) were analyzed. Our cohort included 6 patients (4 males), with a median age of 67 years (59–82). A total of 45 samples were evaluated…

  • Open Access

    ARTICLE

    Brain Tumor Segmentation in Multimodal MRI Using U-Net Layered Structure

    Muhammad Javaid Iqbal1, Muhammad Waseem Iqbal2, Muhammad Anwar3,*, Muhammad Murad Khan4, Abd Jabar Nazimi5, Mohammad Nazir Ahmad6

    CMC-Computers, Materials & Continua, Vol.74, No.3, pp. 5267-5281, 2023, DOI:10.32604/cmc.2023.033024

    Abstract A brain tumour is a mass in which tissues become old or damaged but do not die or leave their space; such masses mainly arise from malignancy. These tissues must die so that new tissues can be born and take their place. Tumour segmentation is a complex and time-consuming problem because tumours vary in size, shape, and appearance. Manually finding such masses by analyzing Magnetic Resonance Images (MRI) is a demanding task for experts and radiologists. Radiologists could not work on large volumes of images simultaneously, and many errors occurred…
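
    To make the U-Net layered structure concrete, the sketch below implements a single encoder-decoder level with a skip connection in PyTorch; the channel sizes, four-modality input, and two output classes are placeholder assumptions, not the paper's configuration.

      import torch
      import torch.nn as nn

      def conv_block(in_ch, out_ch):
          """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
          return nn.Sequential(
              nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
              nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
          )

      class TinyUNet(nn.Module):
          """One-level U-Net: encode, downsample, bottleneck, upsample, skip-concat."""
          def __init__(self, in_ch=4, n_classes=2):      # e.g. 4 MRI modalities in
              super().__init__()
              self.enc = conv_block(in_ch, 32)
              self.down = nn.MaxPool2d(2)
              self.bottleneck = conv_block(32, 64)
              self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
              self.dec = conv_block(64, 32)              # 64 = 32 (skip) + 32 (upsampled)
              self.head = nn.Conv2d(32, n_classes, kernel_size=1)

          def forward(self, x):
              e = self.enc(x)
              b = self.bottleneck(self.down(e))
              d = self.dec(torch.cat([self.up(b), e], dim=1))
              return self.head(d)

      print(TinyUNet()(torch.randn(1, 4, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])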

  • Open Access

    ARTICLE

    Multimodal Fuzzy Downstream Petroleum Supply Chain: A Novel Pentagonal Fuzzy Optimization

    Gul Freen1, Sajida Kousar1, Nasreen Kausar2, Dragan Pamucar3, Georgia Irina Oros4,*

    CMC-Computers, Materials & Continua, Vol.74, No.3, pp. 4861-4879, 2023, DOI:10.32604/cmc.2023.032985

    Abstract The petroleum industry has a complex, inflexible, and challenging supply chain (SC) that impacts both the national economy and people's daily lives through a range of services, including transportation, heating, electricity, lubricants, chemicals, and petrochemicals. In the petroleum industry, supply chain management presents several challenges, especially in the logistics sector, that are not found in other industries, and logistical challenges contribute significantly to the cost of oil. Uncertainty regarding customer demand and supply significantly affects SC networks, so SC flexibility can be maintained by addressing uncertainty. On the other hand, in the real world,…
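
    A pentagonal fuzzy number represents an uncertain quantity with five ordered points. The sketch below shows such a number and a simple weighted-average defuzzification for turning uncertain demand into a crisp value; the weights and the demand figures are illustrative assumptions, not the ranking or optimization method proposed in the paper.

      from dataclasses import dataclass

      @dataclass
      class PentagonalFuzzyNumber:
          """A = (a1, a2, a3, a4, a5) with a1 <= a2 <= a3 <= a4 <= a5;
          a3 is the most plausible value, the outer points bound the support."""
          a1: float
          a2: float
          a3: float
          a4: float
          a5: float

          def defuzzify(self):
              # Simple weighted average giving most weight to the core point a3.
              # Illustrative only; the paper develops its own ranking/optimization.
              return (self.a1 + 2 * self.a2 + 3 * self.a3 + 2 * self.a4 + self.a5) / 9

      # Hypothetical uncertain daily fuel demand (in thousand barrels)
      demand = PentagonalFuzzyNumber(90, 95, 100, 108, 115)
      print(round(demand.defuzzify(), 2))   # 101.22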

  • Open Access

    ARTICLE

    3D Vehicle Detection Algorithm Based on Multimodal Decision-Level Fusion

    Peicheng Shi1,*, Heng Qi1, Zhiqiang Liu1, Aixi Yang2

    CMES-Computer Modeling in Engineering & Sciences, Vol.135, No.3, pp. 2007-2023, 2023, DOI:10.32604/cmes.2023.022304

    Abstract 3D vehicle detection based on LiDAR-camera fusion is an emerging research topic in autonomous driving. The Camera-LiDAR object candidate fusion (CLOCs) method is currently considered one of the more effective decision-level fusion algorithms, but it does not fully utilize the extracted 3D and 2D features. Therefore, we propose a 3D vehicle detection algorithm based on multimodal decision-level fusion. First, the anchor point of the 3D detection bounding box is projected into the 2D image, the distance between the 2D and 3D anchor points is calculated, and this distance is used as a new fusion feature to enhance the…
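
    The fusion feature described here (the distance between a projected 3D anchor and a 2D detection centre) can be sketched with a pinhole projection as below; the camera intrinsics, coordinates, and box formats are assumed for illustration.

      import numpy as np

      def project_to_image(point_3d, K):
          """Project a 3D point in camera coordinates onto the image plane
          using pinhole intrinsics K (3x3). Returns pixel coordinates (u, v)."""
          p = K @ point_3d
          return p[:2] / p[2]

      def anchor_distance(box3d_center, box2d_center, K):
          """Distance between the projected 3D anchor and a 2D detection centre,
          usable as a decision-level fusion feature. Formats are illustrative."""
          uv = project_to_image(np.asarray(box3d_center, float), np.asarray(K, float))
          return float(np.linalg.norm(uv - np.asarray(box2d_center, float)))

      # Assumed intrinsics and a 3D box centre 20 m ahead of the camera
      K = [[700.0, 0.0, 640.0],
           [0.0, 700.0, 360.0],
           [0.0, 0.0, 1.0]]
      print(anchor_distance(box3d_center=[1.5, 0.2, 20.0],
                            box2d_center=[695.0, 372.0], K=K))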

  • Open Access

    ARTICLE

    A Multi-Level Circulant Cross-Modal Transformer for Multimodal Speech Emotion Recognition

    Peizhu Gong1, Jin Liu1, Zhongdai Wu2, Bing Han2, Y. Ken Wang3, Huihua He4,*

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 4203-4220, 2023, DOI:10.32604/cmc.2023.028291

    Abstract Speech emotion recognition, an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, since it involves the semantic features of two different modalities, i.e., audio and text. However, existing methods often fail to effectively represent features and capture correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model can be divided into three steps: feature extraction, interaction, and fusion. Self-supervised embedding models are introduced for feature extraction, which give a more powerful representation of the…
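
    The cross-modal interaction step can be pictured as cross-attention in which text features attend to audio features. The sketch below is a single-head, NumPy-only illustration with random projection weights; it is not the multi-level circulant MLCCT architecture itself.

      import numpy as np

      def cross_modal_attention(text_feat, audio_feat, d_k=32, seed=4):
          """Single-head cross-attention: queries from text, keys/values from audio.

          text_feat: (Lt, D_t), audio_feat: (La, D_a). Projection weights are
          random here purely for illustration; in practice they are learned.
          """
          rng = np.random.default_rng(seed)
          Wq = rng.normal(size=(text_feat.shape[1], d_k))
          Wk = rng.normal(size=(audio_feat.shape[1], d_k))
          Wv = rng.normal(size=(audio_feat.shape[1], d_k))

          Q, K, V = text_feat @ Wq, audio_feat @ Wk, audio_feat @ Wv
          scores = Q @ K.T / np.sqrt(d_k)                    # (Lt, La)
          weights = np.exp(scores - scores.max(axis=1, keepdims=True))
          weights /= weights.sum(axis=1, keepdims=True)      # softmax over audio steps
          return weights @ V                                 # text enriched with audio

      rng = np.random.default_rng(5)
      out = cross_modal_attention(rng.normal(size=(12, 64)), rng.normal(size=(40, 80)))
      print(out.shape)   # (12, 32)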

Displaying results 21-30 of 61 (page 3).