Search Results (6)
  • Open Access


    A More Efficient Approach for Remote Sensing Image Classification

    Huaxiang Song*

    CMC-Computers, Materials & Continua, Vol.74, No.3, pp. 5741-5756, 2023, DOI:10.32604/cmc.2023.034921

    Abstract Over the past decade, the rapid growth of convolutional neural network (CNN) approaches based on deep learning (DL) has greatly improved the performance of machine learning (ML) algorithms on the semantic scene classification (SSC) of remote sensing images (RSI). However, the imbalanced attention paid to classification accuracy over efficiency has partly eroded the advantages of DL-based algorithms, e.g., automation and simplicity. Traditional ML strategies (e.g., handcrafted features or indicators) and accuracy-oriented strategies with high trade-offs (e.g., multi-stage CNNs and ensembles of multiple CNNs) are widely used without any training-efficiency optimization involved, which…

  • Open Access


    TP-MobNet: A Two-pass Mobile Network for Low-complexity Classification of Acoustic Scene

    Soonshin Seo1, Junseok Oh2, Eunsoo Cho2, Hosung Park2, Gyujin Kim2, Ji-Hwan Kim2,*

    CMC-Computers, Materials & Continua, Vol.73, No.2, pp. 3291-3303, 2022, DOI:10.32604/cmc.2022.026259

    Abstract Acoustic scene classification (ASC) is a method of recognizing and classifying environments from their acoustic signals. Various ASC approaches based on deep learning have been developed, with convolutional neural networks (CNNs) proving to be the most reliable and most commonly used in ASC systems due to their suitability for constructing lightweight models. When ASC systems are used in the real world, model complexity and device robustness are essential considerations. In this paper, we propose a two-pass mobile network for low-complexity classification of the acoustic scene, named TP-MobNet. With inverted residuals and linear bottlenecks, TP-MobNet is based on…

  • Open Access


    Intelligent Deep Data Analytics Based Remote Sensing Scene Classification Model

    Ahmed Althobaiti1, Abdullah Alhumaidi Alotaibi2, Sayed Abdel-Khalek3, Suliman A. Alsuhibany4, Romany F. Mansour5,*

    CMC-Computers, Materials & Continua, Vol.72, No.1, pp. 1921-1938, 2022, DOI:10.32604/cmc.2022.025550

    Abstract The latest advancements in the integration of camera sensors pave the way for new Unmanned Aerial Vehicle (UAV) applications, such as analyzing geographical (spatial) variations in earth science to mitigate harmful environmental impacts and climate change. UAVs have attracted significant attention as a remote sensing platform, capturing high-resolution images of different scenes such as land, forest fires, flooding threats, road collisions, landslides, and so on to enhance data analysis and decision making. Dynamic scene classification has attracted much attention in the examination of earth data captured by UAVs. This paper proposes a new multi-modal fusion…

  • Open Access


    A New Method for Scene Classification from the Remote Sensing Images

    Purnachand Kollapudi1, Saleh Alghamdi2, Neenavath Veeraiah3,*, Youseef Alotaibi4, Sushma Thotakura5, Abdulmajeed Alsufyani6

    CMC-Computers, Materials & Continua, Vol.72, No.1, pp. 1339-1355, 2022, DOI:10.32604/cmc.2022.025118

    Abstract The task of classifying remote sensing images by their content has applications in a variety of areas. In recent years, remote sensing image scene classification has attracted considerable research interest. Remote sensing image scene understanding (RSISU) research also encompasses remote sensing image scene retrieval and scene-driven remote sensing image object identification. In the last several years, the emergence of deep learning (DL) methods has driven major breakthroughs in remote sensing image classification, providing new research…

  • Open Access


    IoT-Cloud Empowered Aerial Scene Classification for Unmanned Aerial Vehicles

    K. R. Uthayan1,*, G. Lakshmi Vara Prasad2, V. Mohan3, C. Bharatiraja4, Irina V. Pustokhina5, Denis A. Pustokhin6, Vicente García Díaz7

    CMC-Computers, Materials & Continua, Vol.70, No.3, pp. 5161-5177, 2022, DOI:10.32604/cmc.2022.021300

    Abstract Recent trends in communication technologies and unmanned aerial vehicles (UAVs) find applications in several areas such as healthcare, surveillance, transportation, etc. In addition, the integration of the Internet of Things (IoT) with the cloud computing environment offers several benefits for UAV communication. At the same time, aerial scene classification is one of the major research areas in UAV-enabled MEC systems. In UAV aerial imagery, efficient image representation is crucial for scene classification. Existing scene classification techniques generate mid-level image features with limited representation capabilities that often produce only average results…

  • Open Access


    Adaptive Binary Coding for Scene Classification Based on Convolutional Networks

    Shuai Wang1, Xianyi Chen2, *

    CMC-Computers, Materials & Continua, Vol.65, No.3, pp. 2065-2077, 2020, DOI:10.32604/cmc.2020.09857

    Abstract With the rapid development of computer technology, millions of images are produced every day by different sources. How to process these images efficiently and discern the scenes in them accurately has become an important but challenging task. In this paper, we propose a novel supervised learning framework based on adaptive binary coding for scene classification. Specifically, we first extract high-level features of the images under consideration using available models trained on public datasets. Then, we further design a binary encoding method called one-hot encoding to make the feature representation more efficient. Benefiting from the proposed…
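
    The one-hot encoding step mentioned in this last abstract can be sketched as follows; the function name and the NumPy-based implementation are illustrative assumptions, not the paper's actual code:

    ```python
    import numpy as np

    def one_hot_encode(labels, num_classes):
        """Map integer scene labels to one-hot binary codes.

        labels: sequence of integer class indices in [0, num_classes).
        Returns an int8 array of shape (len(labels), num_classes) in
        which each row contains a single 1 at the label's index.
        """
        labels = np.asarray(labels)
        codes = np.zeros((labels.size, num_classes), dtype=np.int8)
        codes[np.arange(labels.size), labels] = 1
        return codes

    # Example: three scene labels drawn from a 4-class problem.
    print(one_hot_encode([2, 0, 3], 4))
    ```

    Each row is a maximally sparse binary code, which is what makes the representation cheap to compare and store relative to dense real-valued features.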
