Search Results (16)
  • Open Access

    ARTICLE

    Classification Framework for COVID-19 Diagnosis Based on Deep CNN Models

    Walid El-Shafai1, Abeer D. Algarni2,*, Ghada M. El Banby3, Fathi E. Abd El-Samie1,2, Naglaa F. Soliman2,4

    Intelligent Automation & Soft Computing, Vol.31, No.3, pp. 1561-1575, 2022, DOI:10.32604/iasc.2022.020386

    Abstract Automated diagnosis based on medical images is a very promising trend in modern healthcare services. Automated diagnosis systems must handle enormous volumes of data in the form of medical images, using efficient algorithms that can be adapted to the nature of those images. The importance of automated medical diagnosis has grown with the COVID-19 pandemic. COVID-19 first appeared in Wuhan, China, and then spread across the whole world, severely disrupting daily life. The third wave of…

  • Open Access

    ARTICLE

    Performance Comparison of Deep CNN Models for Detecting Driver’s Distraction

    Kathiravan Srinivasan1, Lalit Garg2,*, Debajit Datta3, Abdulellah A. Alaboudi4, N. Z. Jhanjhi5, Rishav Agarwal3, Anmol George Thomas1

    CMC-Computers, Materials & Continua, Vol.68, No.3, pp. 4109-4124, 2021, DOI:10.32604/cmc.2021.016736

    Abstract According to various worldwide statistics, most car accidents occur solely due to human error. A driver needs to stay alert, especially when travelling through high traffic volumes that permit high-speed transit, since a slight distraction can cause a fatal accident. Even though semi-automated checks, such as speed-detecting cameras and speed barriers, are deployed, controlling human error is an arduous task. The key causes of driver distraction include drunk driving, conversing with co-passengers, fatigue, and operating gadgets while driving. If these distractions are accurately predicted, drivers can be alerted through an alarm system. Further, this research…

  • Open Access

    ARTICLE

    Mixed Noise Removal by Residual Learning of Deep CNN

    Kang Yang1, Jielin Jiang1,2,*, Zhaoqing Pan1,2

    Journal of New Media, Vol.2, No.1, pp. 1-10, 2020, DOI:10.32604/jnm.2020.09356

    Abstract Because different noise types have very different distributions, a mixture of multiple noises is very complicated to handle. The most common type of mixed noise adds impulse noise (IN) followed by additive white Gaussian noise (AWGN). From the cascaded reduction of IN and AWGN to the latest sparse representation techniques, many methods have been proposed to remove this form of mixed noise. However, when the mixed noise is very strong, most methods produce numerous artifacts. To solve these problems, we propose a method based on residual learning…

  • Open Access

    ARTICLE

    ECG Classification Using Deep CNN Improved by Wavelet Transform

    Yunxiang Zhao1, Jinyong Cheng1, *, Ping Zhang1, Xueping Peng2

    CMC-Computers, Materials & Continua, Vol.64, No.3, pp. 1615-1628, 2020, DOI:10.32604/cmc.2020.09938

    Abstract Atrial fibrillation is the most common persistent form of arrhythmia. A method based on the wavelet transform combined with a deep convolutional neural network is applied to the automatic classification of electrocardiograms (ECGs). Since the ECG signal is easily corrupted by interference, it is decomposed by a wavelet function into nine sub-signals at different frequency scales, and wavelet reconstruction is then carried out after segmented filtering to eliminate the influence of noise. A 24-layer convolutional neural network extracts hierarchical features with convolution kernels of different sizes, and a softmax classifier performs the final classification. This paper applies…

  • Open Access

    ARTICLE

    Human Action Recognition Based on Supervised Class-Specific Dictionary Learning with Deep Convolutional Neural Network Features

    Binjie Gu1, *, Weili Xiong1, Zhonghu Bai2

    CMC-Computers, Materials & Continua, Vol.63, No.1, pp. 243-262, 2020, DOI:10.32604/cmc.2020.06898

    Abstract Human action recognition in complex environments is a challenging task. Recently, sparse representation has achieved excellent results on human action recognition problems under different conditions. The main idea of sparse representation classification is to construct a general classification scheme in which the training samples of each class serve as a dictionary to express the query sample, and the minimal reconstruction error indicates the corresponding class. However, learning a discriminative dictionary remains difficult. In this work, we make two contributions. First, we build a new and robust human action recognition framework by combining…

  • Open Access

    ARTICLE

    Identifying Materials of Photographic Images and Photorealistic Computer Generated Graphics Based on Deep CNNs

    Qi Cui1,2,*, Suzanne McIntosh3, Huiyu Sun3

    CMC-Computers, Materials & Continua, Vol.55, No.2, pp. 229-241, 2018, DOI:10.3970/cmc.2018.01693

    Abstract Currently, some photorealistic computer graphics are very similar to photographic images. Photorealistic computer-generated graphics can be forged as photographic images, causing serious security problems. The aim of this work is to use a deep neural network to distinguish photographic images (PI) from computer-generated graphics (CG). In existing approaches, image feature classification is computationally intensive and fails to achieve real-time analysis. This paper presents an effective approach to automatically identify PI and CG based on deep convolutional neural networks (DCNNs). Compared with some existing methods, the proposed method achieves real-time forensic performance by deepening the network structure. Experimental results…

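The mixed-noise entry above ("Mixed Noise Removal by Residual Learning of Deep CNN") describes a corruption model — impulse noise (IN) followed by additive white Gaussian noise (AWGN) — and a residual-learning formulation in which a network predicts the noise map rather than the clean image. The sketch below illustrates only that formulation; the function names, noise parameters, and the stand-in residual predictor are illustrative and not taken from the paper:

```python
import numpy as np

def add_mixed_noise(clean, impulse_prob=0.1, sigma=0.05, seed=0):
    """Corrupt an image with impulse noise (IN) followed by additive
    white Gaussian noise (AWGN) -- the mixed-noise model in the abstract.
    `clean` is a float array in [0, 1]; parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    noisy = clean.copy()
    # Salt-and-pepper impulse noise: force a fraction of pixels to 0 or 1.
    mask = rng.random(clean.shape) < impulse_prob
    noisy[mask] = rng.integers(0, 2, size=int(mask.sum())).astype(float)
    # Additive white Gaussian noise on top of the impulse-corrupted image.
    noisy += rng.normal(0.0, sigma, size=clean.shape)
    return np.clip(noisy, 0.0, 1.0)

def residual_denoise(noisy, predict_residual):
    """Residual learning: a network predicts the noise map v = y - x,
    and the restored image is x_hat = y - v."""
    return np.clip(noisy - predict_residual(noisy), 0.0, 1.0)
```

With an oracle residual predictor `lambda y: y - clean`, `residual_denoise` recovers the clean image exactly, which is the point of the formulation: the learning target is the (often simpler) noise map, not the image content.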
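The ECG entry above ("ECG Classification Using Deep CNN Improved by Wavelet Transform") describes a decompose-filter-reconstruct preprocessing step: the signal is split into sub-signals at different frequency scales, filtered, and reconstructed before the CNN sees it. The paper uses a nine-scale decomposition and a 24-layer network, which this sketch does not reproduce; a one-level Haar transform with simple detail thresholding is used purely to make the idea concrete:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: split a signal (even
    length) into a low-frequency approximation and a high-frequency
    detail sub-signal."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar transform (perfect reconstruction)."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def wavelet_filter(signal, threshold=0.2):
    """Zero small high-frequency coefficients, then reconstruct -- a
    one-level stand-in for the segmented filtering described above."""
    approx, detail = haar_dwt(signal)
    detail = np.where(np.abs(detail) < threshold, 0.0, detail)
    return haar_idwt(approx, detail)
```

Because the Haar pair gives perfect reconstruction when no coefficients are altered, thresholding removes only the small high-frequency content attributed to noise while leaving the waveform's large-scale shape for the downstream classifier.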
Displaying results 11-16 of 16 (page 2).