Search Results (20)
  • Open Access

    ARTICLE

    Adversarial Attack-Based Robustness Evaluation for Trustworthy AI

    Eungyu Lee, Yongsoo Lee, Taejin Lee*

    Computer Systems Science and Engineering, Vol.47, No.2, pp. 1919-1935, 2023, DOI:10.32604/csse.2023.039599

    Abstract Artificial Intelligence (AI) technology has been extensively researched in various fields, including the field of malware detection. AI models must be trustworthy to introduce AI systems into critical decision-making and resource protection roles. The problem of robustness to adversarial attacks is a significant barrier to trustworthy AI. Although various adversarial attack and defense methods are actively being studied, there is a lack of research on robustness evaluation metrics that serve as standards for determining whether AI models are safe and reliable against adversarial attacks. An AI model’s robustness level cannot be evaluated by traditional evaluation indicators such as accuracy and…
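The robustness evaluation this abstract calls for is typically probed with gradient-based attacks such as the Fast Gradient Sign Method (FGSM). A minimal sketch, assuming a toy logistic model and a single FGSM step (illustrative only, not the paper's proposed metric):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, grad, eps):
    """FGSM: step of size eps in the sign of the loss gradient, clipped to [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy linear classifier; for true label 1, loss = -log sigmoid(w @ x),
# so the gradient w.r.t. the input is -(1 - sigmoid(w @ x)) * w.
w = np.array([2.0, -1.0, 0.5])   # hypothetical trained weights
x = np.array([0.6, 0.2, 0.9])    # clean input with features in [0, 1]
p_clean = sigmoid(w @ x)         # model confidence on the clean input
grad = -(1.0 - p_clean) * w      # loss gradient w.r.t. x
x_adv = fgsm_perturb(x, grad, eps=0.1)
p_adv = sigmoid(w @ x_adv)       # confidence drops under the attack
```

Robustness metrics of the kind the abstract argues for usually track how quickly confidence such as `p_adv` degrades as `eps` grows, rather than clean accuracy alone.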

  • Open Access

    ARTICLE

    Medical Image Fusion Based on Anisotropic Diffusion and Non-Subsampled Contourlet Transform

    Bhawna Goyal1,*, Ayush Dogra2, Rahul Khoond1, Dawa Chyophel Lepcha1, Vishal Goyal3, Steven L. Fernandes4

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 311-327, 2023, DOI:10.32604/cmc.2023.038398

    Abstract The synthesis of visual information from multiple medical imaging inputs into a single fused image without any loss of detail or distortion is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features to advance the clinical utility of medical imaging meant for the analysis and treatment of medical disorders. This study develops a novel approach to fuse multimodal medical images utilizing anisotropic diffusion (AD) and the non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion to decompose input images into their base and detail layers to coarsely split two features of input…
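The base/detail split in the first step can be illustrated with a plain Perona–Malik anisotropic diffusion pass. This is a simplified sketch with periodic borders for a single grayscale input; the paper's NSCT stage and fusion rules are omitted:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=0.5, gamma=0.2):
    """Perona-Malik smoothing: diffuse strongly in flat regions and weakly
    across edges, producing an edge-preserving base layer."""
    base = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic borders for brevity)
        dn = np.roll(base, -1, axis=0) - base
        ds = np.roll(base, 1, axis=0) - base
        de = np.roll(base, -1, axis=1) - base
        dw = np.roll(base, 1, axis=1) - base
        # conduction coefficients: small where the local gradient is large
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        base += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return base

rng = np.random.default_rng(0)
img = rng.random((32, 32))            # stand-in for one input modality
base = anisotropic_diffusion(img)     # smooth, edge-preserving base layer
detail = img - base                   # detail layer: base + detail == img
```

In a fusion pipeline of the kind described, each modality is decomposed this way, base and detail layers are fused separately, and the layers are recombined.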

  • Open Access

    ARTICLE

    Enhancing the Adversarial Transferability with Channel Decomposition

    Bin Lin1, Fei Gao2, Wenli Zeng3,*, Jixin Chen4, Cong Zhang5, Qinsheng Zhu6, Yong Zhou4, Desheng Zheng4, Qian Qiu7,5, Shan Yang8

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3075-3085, 2023, DOI:10.32604/csse.2023.034268

    Abstract The current adversarial attacks against deep learning models have achieved incredible success in the white-box scenario. However, they often exhibit weak transferability in the black-box scenario, especially when attacking models with defense mechanisms. In this work, we propose a new transfer-based black-box attack called the channel decomposition attack method (CDAM). It can attack multiple black-box models by enhancing the transferability of the adversarial examples. On the one hand, it tunes the gradient and stabilizes the update direction by decomposing the channels of the input example and calculating the aggregate gradient. On the other hand, it helps to escape from local…

  • Open Access

    ARTICLE

    Alpha Fusion Adversarial Attack Analysis Using Deep Learning

    Mohibullah Khan1, Ata Ullah1, Isra Naz2, Sajjad Haider1, Nz Jhanji3,*, Mohammad Shorfuzzaman4, Mehedi Masud4

    Computer Systems Science and Engineering, Vol.46, No.1, pp. 461-473, 2023, DOI:10.32604/csse.2023.029642

    Abstract Deep learning models encompass a powerful learning ability that integrates feature extraction and classification to improve accuracy. Convolutional Neural Networks (CNNs) perform well in machine learning and image processing tasks like segmentation, classification, detection, and identification. However, CNN models remain sensitive to noise and attacks: the smallest change to images, as in an adversarial attack, can greatly decrease the accuracy of a CNN model. This paper presents an alpha fusion attack analysis and generates a defense against adversarial attacks. The proposed work is divided into three phases: firstly, an MLSTM-based CNN classification model is developed for…

  • Open Access

    ARTICLE

    Chained Dual-Generative Adversarial Network: A Generalized Defense Against Adversarial Attacks

    Amitoj Bir Singh1, Lalit Kumar Awasthi1, Urvashi1, Mohammad Shorfuzzaman2, Abdulmajeed Alsufyani2, Mueen Uddin3,*

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 2541-2555, 2023, DOI:10.32604/cmc.2023.032795

    Abstract Neural networks play a significant role in the field of image classification. When an input image is modified by an adversarial attack, the changes are imperceptible to the human eye, but they still lead to misclassification of the image. Researchers have demonstrated such attacks making production self-driving cars misclassify Stop road signs as 45 Miles Per Hour (MPH) road signs, and a turtle being misclassified as an AK47. Three primary types of defense approaches exist which can safeguard against such attacks, i.e., Gradient Masking, Robust Optimization, and Adversarial Example Detection. Very few approaches use Generative Adversarial Networks (GAN) for defense against…

  • Open Access

    ARTICLE

    Classification of Adversarial Attacks Using Ensemble Clustering Approach

    Pongsakorn Tatongjai1, Tossapon Boongoen2,*, Natthakan Iam-On2, Nitin Naik3, Longzhi Yang4

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 2479-2498, 2023, DOI:10.32604/cmc.2023.024858

    Abstract As more business transactions and information services are implemented via communication networks, both personal and organizational assets encounter a higher risk of attack. To safeguard these, a perimeter defence like a NIDS (network-based intrusion detection system) can be effective against known intrusions. There has been a great deal of attention within the joint community of security and data science to improving machine-learning based NIDS so that it becomes more accurate against adversarial attacks, where obfuscation techniques are applied to disguise patterns of intrusive traffic. The current research focuses on non-payload connections at the TCP (transmission control protocol) stack level that…

  • Open Access

    ARTICLE

    Defending Adversarial Examples by a Clipped Residual U-Net Model

    Kazim Ali1,*, Adnan N. Qureshi1, Muhammad Shahid Bhatti2, Abid Sohail2, Mohammad Hijji3

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2237-2256, 2023, DOI:10.32604/iasc.2023.028810

    Abstract Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are vulnerable to adversarial attacks. These attacks can quickly fool deep learning models, e.g., different convolutional neural networks (CNNs) used in various computer vision tasks from image classification to object detection. The adversarial examples are carefully designed by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet defense, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising…

  • Open Access

    ARTICLE

    An Overview of Adversarial Attacks and Defenses

    Kai Chen*, Jinwei Wang, Jiawei Zhang

    Journal of Information Hiding and Privacy Protection, Vol.4, No.1, pp. 15-24, 2022, DOI:10.32604/jihpp.2022.029006

    Abstract In recent years, machine learning has become more and more popular, and the continuous development of deep learning technology in particular has brought great revolutions to many fields. In tasks such as image classification, natural language processing, information hiding, and multimedia synthesis, the performance of deep learning has far exceeded that of traditional algorithms. However, researchers have found that although deep learning can train an accurate model from a large amount of data to complete various tasks, the model is vulnerable to examples that are artificially modified. Such techniques are called adversarial attacks, and the examples are called adversarial examples…

  • Open Access

    ARTICLE

    A Two Stream Fusion Assisted Deep Learning Framework for Stomach Diseases Classification

    Muhammad Shahid Amin1, Jamal Hussain Shah1, Mussarat Yasmin1, Ghulam Jillani Ansari2, Muhamamd Attique Khan3, Usman Tariq4, Ye Jin Kim5, Byoungchol Chang6,*

    CMC-Computers, Materials & Continua, Vol.73, No.2, pp. 4423-4439, 2022, DOI:10.32604/cmc.2022.030432

    Abstract Due to rapid development in Artificial Intelligence (AI) and Deep Learning (DL), it is difficult to maintain the security and robustness of these techniques and algorithms given the emergence of adversarial sampling, to which such models are sensitive. Fake samples cause AI and DL models to produce divergent results. Adversarial attacks that have been successfully implemented in real-world scenarios highlight their applicability even further. In this regard, minor modifications of input images constitute "adversarial attacks" that dramatically alter model performance. Recently, such attacks and defensive strategies have been gaining a lot of attention from the machine…

  • Open Access

    ARTICLE

    Adversarial Training Against Adversarial Attacks for Machine Learning-Based Intrusion Detection Systems

    Muhammad Shahzad Haroon*, Husnain Mansoor Ali

    CMC-Computers, Materials & Continua, Vol.73, No.2, pp. 3513-3527, 2022, DOI:10.32604/cmc.2022.029858

    Abstract Intrusion detection systems play an important role in defending networks from security breaches. End-to-end machine learning-based intrusion detection systems are being used to achieve high detection accuracy. However, in the case of adversarial attacks, which cause misclassification by introducing imperceptible perturbations on input samples, the performance of machine learning-based intrusion detection systems is greatly affected. Though such problems have been widely discussed in the image processing domain, very few studies have investigated network intrusion detection systems and proposed corresponding defences. In this paper, we attempt to fill this gap by using adversarial attacks on standard intrusion detection datasets and then using adversarial samples…
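The adversarial-training idea described here can be sketched end to end on toy data. This is a hypothetical logistic-regression "detector" with FGSM-crafted samples folded into each update; the feature clusters and parameters are illustrative, not the paper's datasets or model:

```python
import numpy as np

def fgsm(X, w, y, eps):
    """Craft FGSM adversarial samples against a logistic-regression detector."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad = (p - y)[:, None] * w[None, :]     # d(cross-entropy)/dX
    return X + eps * np.sign(grad)

def train(X, y, eps=0.0, lr=0.5, epochs=200, seed=0):
    """Logistic regression; eps > 0 augments each step with FGSM samples
    crafted against the current weights, i.e., adversarial training."""
    w = np.random.default_rng(seed).normal(size=X.shape[1]) * 0.01
    for _ in range(epochs):
        Xb = X if eps == 0 else np.vstack([X, fgsm(X, w, y, eps)])
        yb = y if eps == 0 else np.concatenate([y, y])
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
        w -= lr * Xb.T @ (p - yb) / len(yb)  # gradient step on cross-entropy
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == y))

# toy "traffic features": one benign and one intrusive cluster
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, (100, 4)), rng.normal(1, 0.3, (100, 4))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w_plain = train(X, y)                 # standard training
w_robust = train(X, y, eps=0.3)       # adversarial training
X_adv = fgsm(X, w_plain, y, eps=0.3)  # attack aimed at the plain model
```

On real intrusion detection datasets the same loop is run with a deep model over the preprocessed connection features; the linear toy only illustrates the training-set augmentation step.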

Displaying 1-10 of 20 results (page 1 of 2).