Search Results (18)
  • Open Access

    ARTICLE

    Practical Adversarial Attacks Imperceptible to Humans in Visual Recognition

    Donghyeok Park, Sumin Yeon, Hyeon Seo, Seok-Jun Buu, Suwon Lee

    CMES-Computer Modeling in Engineering & Sciences, Vol.142, No.3, pp. 2725-2737, 2025, DOI:10.32604/cmes.2025.061732 - 03 March 2025

    Abstract Recent research on adversarial attacks has primarily focused on white-box attack techniques, with limited exploration of black-box attack methods. Furthermore, many black-box studies assume that the output label and probability distribution can be observed without any constraint on the number of attack attempts. This disregard for the real-world practicality of attacks, particularly their detectability by humans, has left a gap in the research landscape. Considering these limitations, our study focuses on a similar-color attack method, assuming access only to the output label, limiting the number of…
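
As a concrete illustration of the setting this abstract describes, here is a minimal sketch of a label-only, query-limited black-box attack that drifts pixels by small, similar-color shifts. It is not the authors' algorithm; `query_label`, the query budget, and the shift magnitude are all assumptions.

```python
import numpy as np

def similar_color_attack(image, query_label, true_label,
                         max_queries=500, delta=4):
    """Label-only black-box attack sketch: nudge random pixels by a
    small, perceptually similar color shift until the observed label
    flips or the query budget runs out (illustrative only)."""
    rng = np.random.default_rng(0)
    adv = image.astype(np.int16)
    for q in range(max_queries):
        y = rng.integers(0, image.shape[0])
        x = rng.integers(0, image.shape[1])
        candidate = adv.copy()
        # A low-magnitude RGB shift keeps the change hard for humans to see.
        shift = rng.integers(-delta, delta + 1, size=3)
        candidate[y, x] = np.clip(candidate[y, x] + shift, 0, 255)
        # The attacker observes only the predicted class label.
        if query_label(candidate.astype(np.uint8)) != true_label:
            return candidate.astype(np.uint8), q + 1   # attack succeeded
        adv = candidate                                # keep drifting
    return adv.astype(np.uint8), max_queries           # budget exhausted
```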

  • Open Access

    ARTICLE

    Secure Channel Estimation Using Norm Estimation Model for 5G Next Generation Wireless Networks

    Khalil Ullah, Song Jian, Muhammad Naeem Ul Hassan, Suliman Khan, Mohammad Babar, Arshad Ahmad, Shafiq Ahmad

    CMC-Computers, Materials & Continua, Vol.82, No.1, pp. 1151-1169, 2025, DOI:10.32604/cmc.2024.057328 - 03 January 2025

    Abstract The emergence of next generation networks (NextG), including 5G and beyond, is reshaping the technological landscape of cellular and mobile networks. These networks are sufficiently scaled to interconnect billions of users and devices. Researchers in academia and industry are focusing on technological advancements to achieve high-speed transmission, cell planning, and latency reduction to facilitate emerging applications such as virtual reality, the metaverse, smart cities, smart health, and autonomous vehicles. NextG continuously improves its network functionality to support these applications. Multiple input multiple output (MIMO) technology offers spectral efficiency, dependability, and overall performance in conjunction with…
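
The channel-estimation theme can be made concrete with a toy stand-in (not the paper's norm estimation model): a least-squares MIMO channel estimate from pilot symbols, gated by a simple norm sanity check. `expected_norm` and `tol` are hypothetical.

```python
import numpy as np

def ls_channel_estimate(Y, X):
    """Least-squares MIMO estimate H ~ Y @ pinv(X), where X holds the
    transmitted pilot symbols and Y the received signal (sketch only)."""
    return Y @ np.linalg.pinv(X)

def norm_gate(H_est, expected_norm, tol=0.5):
    """Toy 'norm check': treat an estimate whose Frobenius norm deviates
    far from the expected channel energy as suspicious (e.g., spoofed
    pilots). Threshold and expected norm are illustrative assumptions."""
    return abs(np.linalg.norm(H_est) - expected_norm) <= tol * expected_norm
```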

  • Open Access

    ARTICLE

    Local Adaptive Gradient Variance Attack for Deep Fake Fingerprint Detection

    Chengsheng Yuan, Baojie Cui, Zhili Zhou, Xinting Li, Qingming Jonathan Wu

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 899-914, 2024, DOI:10.32604/cmc.2023.045854 - 30 January 2024

    Abstract In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) tasks because of its remarkable performance. However, recent studies have shown that these deep fake fingerprint detection (DFFD) models are not resistant to adversarial examples, which are generated by introducing subtle perturbations into the fingerprint image, causing the model to make false judgments. Most existing adversarial example generation methods are based on gradient optimization, which easily falls into local optima, resulting in poor transferability of adversarial attacks. In addition, the perturbation added…
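
To ground the gradient-variance idea, here is a hedged sketch of an iterative sign-gradient attack whose step is damped where recent gradients fluctuate, a rough proxy for locally unstable, model-specific optima. This illustrates the spirit of the abstract, not the paper's algorithm; `grad_fn`, `eps`, and `steps` are assumptions.

```python
import numpy as np

def variance_scaled_attack(x, grad_fn, eps=8/255, steps=10):
    """Iterative sign-gradient attack sketch: step sizes shrink where
    the gradient has been locally unstable across iterations, to avoid
    overfitting the perturbation to one model's local optimum.
    grad_fn(x) -> dLoss/dx is caller-supplied."""
    alpha = eps / steps
    adv = x.copy()
    recent = []
    for _ in range(steps):
        g = grad_fn(adv)
        recent.append(g)
        # Element-wise variance over the last few gradients.
        var = np.var(np.stack(recent[-3:]), axis=0)
        step = alpha * np.sign(g) / (1.0 + var)      # damp unstable pixels
        adv = np.clip(adv + step, x - eps, x + eps)  # stay in the eps-ball
        adv = np.clip(adv, 0.0, 1.0)                 # stay a valid image
    return adv
```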

  • Open Access

    ARTICLE

    Evaluating the Efficacy of Latent Variables in Mitigating Data Poisoning Attacks in the Context of Bayesian Networks: An Empirical Study

    Shahad Alzahrani, Hatim Alsuwat, Emad Alsuwat

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.2, pp. 1635-1654, 2024, DOI:10.32604/cmes.2023.044718 - 29 January 2024

    Abstract Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework…
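
A toy illustration of the detection setting (not the paper's latent-variable method): compare the structure learned from a new data batch against a trusted baseline and flag large edge-set changes. The edge representation and threshold below are hypothetical.

```python
def structural_hamming_distance(edges_a, edges_b):
    """Count edges added or removed between two learned BN structures,
    each given as an iterable of (parent, child) tuples."""
    return len(set(edges_a) ^ set(edges_b))

def flag_poisoning(baseline_edges, new_edges, threshold=3):
    """Toy poisoning detector: a structure learned from a suspect batch
    that diverges sharply from the trusted baseline suggests the batch
    was poisoned. The threshold is an illustrative assumption."""
    return structural_hamming_distance(baseline_edges, new_edges) > threshold
```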

  • Open Access

    ARTICLE

    VeriFace: Defending against Adversarial Attacks in Face Verification Systems

    Awny Sayed, Sohair Kilany, Alaa Zaki, Ahmed Mahfouz

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3151-3166, 2023, DOI:10.32604/cmc.2023.040256 - 08 October 2023

    Abstract Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms, adversarial detection…
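
The two-stage design described in the abstract can be sketched as a pipeline: detect suspected adversarial inputs, remove perturbations, then verify. All components (`detector`, `denoiser`, `embed`) and the similarity threshold are caller-supplied assumptions, not the VeriFace models themselves.

```python
import numpy as np

def verify_pipeline(img_a, img_b, detector, denoiser, embed, thresh=0.5):
    """Two-stage defense sketch: (1) detect suspected adversarial
    inputs, (2) strip perturbations by denoising, then (3) compare
    face embeddings by cosine similarity. Returns True if the two
    images are judged to show the same identity."""
    if detector(img_a):
        img_a = denoiser(img_a)
    if detector(img_b):
        img_b = denoiser(img_b)
    ea, eb = embed(img_a), embed(img_b)
    cos = ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb))
    return cos >= thresh
```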

  • Open Access

    ARTICLE

    Medical Image Fusion Based on Anisotropic Diffusion and Non-Subsampled Contourlet Transform

    Bhawna Goyal, Ayush Dogra, Rahul Khoond, Dawa Chyophel Lepcha, Vishal Goyal, Steven L. Fernandes

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 311-327, 2023, DOI:10.32604/cmc.2023.038398 - 08 June 2023

    Abstract The synthesis of visual information from multiple medical imaging inputs into a single fused image without loss of detail or distortion is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features to advance the clinical utility of medical imaging meant for the analysis and treatment of medical disorders. This study develops a novel approach to fuse multimodal medical images utilizing anisotropic diffusion (AD) and non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion to decompose input images into their base and detail layers to coarsely…
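
The base/detail decomposition-and-fusion workflow can be illustrated with stand-ins: a Gaussian filter in place of anisotropic diffusion, and a max-absolute rule in place of the NSCT detail stage. A minimal sketch, with all parameters assumed:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse(img1, img2, sigma=2.0):
    """Base/detail fusion sketch for two registered, same-size images
    (float arrays). Smoothing separates each image into a base layer
    and a detail layer; bases are averaged to preserve energy, details
    are fused by the max-absolute rule to keep the sharpest features."""
    base1, base2 = gaussian_filter(img1, sigma), gaussian_filter(img2, sigma)
    det1, det2 = img1 - base1, img2 - base2
    fused_base = 0.5 * (base1 + base2)
    fused_det = np.where(np.abs(det1) >= np.abs(det2), det1, det2)
    return fused_base + fused_det
```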

  • Open Access

    ARTICLE

    Chained Dual-Generative Adversarial Network: A Generalized Defense Against Adversarial Attacks

    Amitoj Bir Singh, Lalit Kumar Awasthi, Urvashi, Mohammad Shorfuzzaman, Abdulmajeed Alsufyani, Mueen Uddin

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 2541-2555, 2023, DOI:10.32604/cmc.2023.032795 - 31 October 2022

    Abstract Neural networks play a significant role in the field of image classification. When an input image is modified by adversarial attacks, the changes are imperceptible to the human eye, but they still lead to misclassification of the image. Researchers have demonstrated attacks that make production self-driving cars misclassify Stop signs as 45 Miles Per Hour (MPH) signs, and a turtle as an AK47. Three primary types of defense approaches can safeguard against such attacks: Gradient Masking, Robust Optimization, and Adversarial Example Detection. Very few approaches use Generative Adversarial…
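
At inference time, a chained-generator defense of the kind the abstract describes reduces to composing two purification networks before the classifier. A minimal sketch, with `generator_1`, `generator_2`, and `classifier` as assumed, caller-supplied models:

```python
def chained_gan_defense(x, generator_1, generator_2, classifier):
    """Chained dual-generator defense sketch: the (possibly adversarial)
    input passes through two reconstruction generators in sequence, each
    trained to map images back onto the clean data manifold, and only
    the purified result is classified."""
    purified = generator_2(generator_1(x))
    return classifier(purified)
```

Chaining two generators rather than one is the design choice highlighted in the title: the second generator can clean up residual perturbations the first one misses, at the cost of an extra forward pass.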

  • Open Access

    ARTICLE

    Classification of Adversarial Attacks Using Ensemble Clustering Approach

    Pongsakorn Tatongjai, Tossapon Boongoen, Natthakan Iam-On, Nitin Naik, Longzhi Yang

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 2479-2498, 2023, DOI:10.32604/cmc.2023.024858 - 31 October 2022

    Abstract As more business transactions and information services are implemented via communication networks, both personal and organizational assets face a higher risk of attack. To safeguard these, a perimeter defence like a NIDS (network-based intrusion detection system) can be effective against known intrusions. The joint community of security and data science has paid a great deal of attention to making machine-learning-based NIDS more accurate against adversarial attacks, where obfuscation techniques are applied to disguise patterns of intrusive traffic. The current research focuses on non-payload connections at the TCP (transmission…
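
Ensemble clustering itself can be sketched independently of the NIDS features: combine several k-means partitions into a co-association matrix, then cut a hierarchical clustering of the induced distances. The parameters below are illustrative, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def ensemble_cluster(X, k=2, runs=10, seed=0):
    """Ensemble-clustering sketch: each k-means run votes on whether two
    connection records belong together; the vote fractions form a
    co-association matrix whose complement is clustered hierarchically."""
    n = len(X)
    co = np.zeros((n, n))
    for r in range(runs):
        labels = KMeans(n_clusters=k, n_init=5,
                        random_state=seed + r).fit_predict(X)
        co += (labels[:, None] == labels[None, :])
    dist = 1.0 - co / runs          # frequent co-membership -> small distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist), method="average")
    return fcluster(Z, t=k, criterion="maxclust")
```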

  • Open Access

    ARTICLE

    Defending Adversarial Examples by a Clipped Residual U-Net Model

    Kazim Ali, Adnan N. Qureshi, Muhammad Shahid Bhatti, Abid Sohail, Mohammad Hijji

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2237-2256, 2023, DOI:10.32604/iasc.2023.028810 - 19 July 2022

    Abstract Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are vulnerable to adversarial attacks. Such attacks can quickly fool deep learning models, e.g., different convolutional neural networks (CNNs) used in various computer vision tasks from image classification to object detection. Adversarial examples are carefully designed by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet defense, Generative Adversarial Network Defense, Deep Regret Analytic Generative…
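
The clipped-residual idea can be sketched as: predict the adversarial residual, clip it to a small budget, subtract it, and clamp the result to valid pixels. `residual_net` and `eps` are hypothetical stand-ins for the paper's CRU-Net and its settings.

```python
import numpy as np

def clipped_residual_defense(x_adv, residual_net, eps=8/255):
    """Clipped residual reconstruction sketch: a network estimates the
    adversarial residual, the estimate is bounded so the correction
    cannot overshoot, and the cleaned image is clamped to [0, 1]."""
    residual = np.clip(residual_net(x_adv), -eps, eps)  # bound the correction
    return np.clip(x_adv - residual, 0.0, 1.0)          # valid pixel range
```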

  • Open Access

    ARTICLE

    An Optimised Defensive Technique to Recognize Adversarial Iris Images Using Curvelet Transform

    K. Meenakshi, G. Maragatham

    Intelligent Automation & Soft Computing, Vol.35, No.1, pp. 627-643, 2023, DOI:10.32604/iasc.2023.026961 - 06 June 2022

    Abstract Deep Learning is one of the most popular computer science techniques, with applications in natural language processing, image processing, pattern identification, and various other fields. Despite the success of these deep learning algorithms in scenarios such as spam detection, malware detection, object detection and tracking, face recognition, and automatic driving, the algorithms and their associated training data are vulnerable to numerous security threats. These threats ultimately result in significant performance degradation. Moreover, supervised learning models are affected by manipulated data known as adversarial examples, which are images with a particular level…
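
Because adversarial perturbations tend to concentrate in high-frequency bands, a curvelet-style detector can be approximated, purely for illustration, by measuring high-frequency spectral energy. A real curvelet transform would replace the FFT below, and the threshold is an assumption.

```python
import numpy as np

def highfreq_energy(img, radius_frac=0.6):
    """Stand-in for a curvelet-coefficient feature on a 2D grayscale
    image: mean spectral magnitude outside a central low-frequency disk,
    where adversarial noise typically concentrates."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    mask = r > radius_frac * min(h, w) / 2
    return np.abs(F[mask]).mean()

def is_adversarial(img, threshold=50.0):
    """Flag an iris image whose high-frequency energy exceeds an
    (illustrative) threshold as a suspected adversarial example."""
    return highfreq_energy(img) > threshold
```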

Displaying results 1-10 of 18.