Search Results (26)
  • Open Access

    ARTICLE

    Local Adaptive Gradient Variance Attack for Deep Fake Fingerprint Detection

    Chengsheng Yuan1,2, Baojie Cui1,2, Zhili Zhou3, Xinting Li4,*, Qingming Jonathan Wu5

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 899-914, 2024, DOI:10.32604/cmc.2023.045854

    Abstract In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) tasks because of its remarkable performance. However, recent studies have shown that these deep fake fingerprint detection (DFFD) models are not resistant to attacks by adversarial examples, which are generated by introducing subtle perturbations into the fingerprint image, causing the model to make false judgments. Most existing adversarial example generation methods are based on gradient optimization, which easily falls into local optima, resulting in poor transferability of adversarial attacks. In addition, the perturbation added to the blank area of…
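The gradient-optimization style of adversarial example generation this abstract criticizes can be illustrated with a minimal one-step sign-gradient (FGSM-style) sketch on a toy linear scorer; the model, names (`fgsm_perturb`, `w`, `eps`), and loss are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def fgsm_perturb(x, w, y, eps):
    """One-step sign-gradient attack on a toy linear scorer s(x) = w @ x.

    Loss is taken as -y * s(x), so its gradient w.r.t. the input is -y * w;
    the adversarial example moves eps along the sign of that gradient.
    (Hypothetical stand-in for a deep fingerprint classifier.)
    """
    grad = -y * w                       # d(loss)/dx for the linear scorer
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=8)                  # toy "model" weights
x = rng.normal(size=8)                  # clean example
y = 1.0                                 # true label in {-1, +1}
x_adv = fgsm_perturb(x, w, y, eps=0.1)
# the attack strictly lowers the true-class margin y * (w @ x)
print(y * (w @ x), ">", y * (w @ x_adv))
```

A single sign-gradient step like this is exactly the kind of local, greedy update the abstract argues transfers poorly between models.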

  • Open Access

    ARTICLE

    Evaluating the Efficacy of Latent Variables in Mitigating Data Poisoning Attacks in the Context of Bayesian Networks: An Empirical Study

    Shahad Alzahrani1, Hatim Alsuwat2, Emad Alsuwat3,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.2, pp. 1635-1654, 2024, DOI:10.32604/cmes.2023.044718

    Abstract Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify…

  • Open Access

    ARTICLE

    Enhancing Healthcare Data Security and Disease Detection Using Crossover-Based Multilayer Perceptron in Smart Healthcare Systems

    Mustufa Haider Abidi*, Hisham Alkhalefah, Mohamed K. Aboudaif

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.1, pp. 977-997, 2024, DOI:10.32604/cmes.2023.044169

    Abstract Healthcare data requires accurate disease-detection analysis, real-time monitoring, and continual advancement to ensure proper treatment for patients. Consequently, Machine Learning methods are widely utilized in Smart Healthcare Systems (SHS) to extract valuable features from heterogeneous, high-dimensional healthcare data for predicting various diseases and monitoring patient activities. These methods are employed across domains that are susceptible to adversarial attacks, necessitating careful consideration. Hence, this paper proposes a crossover-based Multilayer Perceptron (CMLP) model. The collected samples are pre-processed and fed into the crossover-based multilayer perceptron neural network to detect adversarial attacks on patients' medical records. Once an…

  • Open Access

    ARTICLE

    An Efficient Character-Level Adversarial Attack Inspired by Textual Variations in Online Social Media Platforms

    Jebran Khan1, Kashif Ahmad2, Kyung-Ah Sohn1,3,*

    Computer Systems Science and Engineering, Vol.47, No.3, pp. 2869-2894, 2023, DOI:10.32604/csse.2023.040159

    Abstract In recent years, the growing popularity of social media platforms has led to several interesting natural language processing (NLP) applications. However, these social media-based NLP applications are subject to different types of adversarial attacks due to the vulnerabilities of machine learning (ML) and NLP techniques. This work presents a new low-level adversarial attack recipe inspired by textual variations in online social media communication. These variations are generated to convey the message using out-of-vocabulary words based on visual and phonetic similarities of characters and words in the shortest possible form. The intuition of the proposed scheme is to generate adversarial examples…
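A character-level perturbation based on visual similarity, of the kind this abstract describes, can be sketched as a simple substitution pass; the substitution table, `budget` parameter, and scoring-free greedy choice are assumptions for illustration, not the authors' recipe.

```python
# Hypothetical table of visually similar replacements (leetspeak-style)
VISUAL_SUBS = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}

def perturb(text: str, budget: int = 2) -> str:
    """Replace up to `budget` characters with visually similar ones,
    scanning left to right. A real attack would pick positions that
    most degrade a target classifier's confidence."""
    out, used = [], 0
    for ch in text:
        sub = VISUAL_SUBS.get(ch.lower())
        if sub is not None and used < budget:
            out.append(sub)
            used += 1
        else:
            out.append(ch)
    return "".join(out)

print(perturb("social media"))  # -> "$0cial media"
```

Such substitutions keep the text readable to humans while pushing tokens out of the model's vocabulary, which is the core intuition the abstract states.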

  • Open Access

    ARTICLE

    An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments

    Weizheng Wang1,3, Xiangqi Wang2,*, Xianmin Pan1, Xingxing Gong3, Jian Liang3, Pradip Kumar Sharma4, Osama Alfarraj5, Wael Said6

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3859-3876, 2023, DOI:10.32604/cmc.2023.041346

    Abstract Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations. The remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges in AE defense through image denoising, this paper proposes a novel AE detection technique. The proposed technique combines multiple traditional image-denoising algorithms with Convolutional Neural Network (CNN) structures. The detector model integrates the classification results of different models as its input and calculates the…

  • Open Access

    ARTICLE

    VeriFace: Defending against Adversarial Attacks in Face Verification Systems

    Awny Sayed1, Sohair Kinlany2, Alaa Zaki2, Ahmed Mahfouz2,3,*

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3151-3166, 2023, DOI:10.32604/cmc.2023.040256

    Abstract Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms: adversarial detection and adversarial removal. The first…

  • Open Access

    ARTICLE

    Adversarial Attack-Based Robustness Evaluation for Trustworthy AI

    Eungyu Lee, Yongsoo Lee, Taejin Lee*

    Computer Systems Science and Engineering, Vol.47, No.2, pp. 1919-1935, 2023, DOI:10.32604/csse.2023.039599

    Abstract Artificial Intelligence (AI) technology has been extensively researched in various fields, including the field of malware detection. AI models must be trustworthy to introduce AI systems into critical decision-making and resource protection roles. The problem of robustness to adversarial attacks is a significant barrier to trustworthy AI. Although various adversarial attack and defense methods are actively being studied, there is a lack of research on robustness evaluation metrics that serve as standards for determining whether AI models are safe and reliable against adversarial attacks. An AI model’s robustness level cannot be evaluated by traditional evaluation indicators such as accuracy and…

  • Open Access

    ARTICLE

    Medical Image Fusion Based on Anisotropic Diffusion and Non-Subsampled Contourlet Transform

    Bhawna Goyal1,*, Ayush Dogra2, Rahul Khoond1, Dawa Chyophel Lepcha1, Vishal Goyal3, Steven L. Fernandes4

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 311-327, 2023, DOI:10.32604/cmc.2023.038398

    Abstract The synthesis of visual information from multiple medical imaging inputs to a single fused image without any loss of detail and distortion is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features to advance the clinical utility of medical imaging meant for the analysis and treatment of medical disorders. This study develops a novel approach to fuse multimodal medical images utilizing anisotropic diffusion (AD) and non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion for decomposing input images to their base and detail layers to coarsely split two features of input…
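The anisotropic-diffusion split into base and detail layers that this abstract describes can be sketched with a minimal Perona-Malik iteration; the parameter values, toy image, and periodic borders are assumptions for illustration, and the NSCT fusion stage is omitted entirely.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=20.0, step=0.2):
    """Edge-preserving smoothing (Perona-Malik); returns the base layer."""
    base = img.astype(float).copy()

    def g(d):
        # conduction coefficient: ~1 in flat regions, ~0 across strong edges
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        # finite differences toward the four neighbours (periodic borders)
        dn = np.roll(base, -1, axis=0) - base
        ds = np.roll(base, 1, axis=0) - base
        de = np.roll(base, -1, axis=1) - base
        dw = np.roll(base, 1, axis=1) - base
        base += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return base

rng = np.random.default_rng(0)
img = np.zeros((16, 16))
img[:, 8:] = 100.0                        # two flat regions, one sharp edge
img += rng.normal(0.0, 5.0, img.shape)    # add noise
base = anisotropic_diffusion(img)         # smooth layer, edge preserved
detail = img - base                       # detail layer (mostly the noise)
```

The key property is that the conduction term suppresses diffusion across the strong edge while averaging out noise inside each flat region, which is what makes the base/detail split useful for fusion.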

  • Open Access

    ARTICLE

    Enhancing the Adversarial Transferability with Channel Decomposition

    Bin Lin1, Fei Gao2, Wenli Zeng3,*, Jixin Chen4, Cong Zhang5, Qinsheng Zhu6, Yong Zhou4, Desheng Zheng4, Qian Qiu7,5, Shan Yang8

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3075-3085, 2023, DOI:10.32604/csse.2023.034268

    Abstract Current adversarial attacks against deep learning models have achieved incredible success in the white-box scenario. However, they often exhibit weak transferability in the black-box scenario, especially when attacking models with defense mechanisms. In this work, we propose a new transfer-based black-box attack called the channel decomposition attack method (CDAM). It can attack multiple black-box models by enhancing the transferability of the adversarial examples. On the one hand, it tunes the gradient and stabilizes the update direction by decomposing the channels of the input example and calculating the aggregate gradient. On the other hand, it helps to escape from local…

  • Open Access

    ARTICLE

    Alpha Fusion Adversarial Attack Analysis Using Deep Learning

    Mohibullah Khan1, Ata Ullah1, Isra Naz2, Sajjad Haider1, Nz Jhanji3,*, Mohammad Shorfuzzaman4, Mehedi Masud4

    Computer Systems Science and Engineering, Vol.46, No.1, pp. 461-473, 2023, DOI:10.32604/csse.2023.029642

    Abstract Deep learning models encompass a powerful learning ability, integrating feature extraction and classification to improve accuracy. Convolutional Neural Networks (CNNs) perform well in machine learning and image processing tasks like segmentation, classification, detection, and identification. However, CNN models remain sensitive to noise and attacks: even the smallest change to images, as in an adversarial attack, can greatly decrease the accuracy of a CNN model. This paper presents an alpha-fusion attack analysis and generates a defense against adversarial attacks. The proposed work is divided into three phases: firstly, an MLSTM-based CNN classification model is developed for…

Displaying 1-10 of 26 results (page 1).