
Search Results (41)
  • Open Access

    ARTICLE

    Enhancing Adversarial Example Transferability via Regularized Constrained Feature Layer

    Xiaoyin Yi1,2, Long Chen1,3,4,*, Jiacheng Huang1, Ning Yu1, Qian Huang5

    CMC-Computers, Materials & Continua, Vol.83, No.1, pp. 157-175, 2025, DOI:10.32604/cmc.2025.059863 - 26 March 2025

    Abstract: Transfer-based Adversarial Attacks (TAAs) can deceive a victim model even without prior knowledge of it by exploiting the transferability of adversarial examples: examples generated on a surrogate model often remain effective when applied to other models. However, adversarial examples also tend to overfit, as they are tailored to the particular architecture and feature representation of the source model. Consequently, their effectiveness decreases in black-box transfer attacks against different target models. To solve this problem, this study proposes an approach based on a Regularized Constrained Feature…
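    The truncated abstract above mentions transferability only at a high level. As a generic, hedged sketch of the underlying idea (the classic FGSM recipe applied to toy linear models, not the paper's regularized feature-layer method; the weights and numbers below are invented for illustration):

```python
# Two simple linear classifiers standing in for a surrogate and a target
# model; score > 0 means class 1. The weights are invented for illustration.
w_surrogate = [1.0, -2.0, 0.5]
w_target = [0.9, -1.8, 0.6]  # similar, but not identical, decision boundary

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def predict(w, x):
    return int(sum(wi * xi for wi, xi in zip(w, x)) > 0)

# A clean input that both models classify as class 1.
x = [2.0, 0.5, 1.0]

# FGSM-style step computed ONLY from the surrogate: for a linear model the
# gradient of the score with respect to the input is just the weight vector.
eps = 1.2
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w_surrogate)]

# The perturbation crafted on the surrogate also fools the unseen target.
print(predict(w_target, x), "->", predict(w_target, x_adv))  # 1 -> 0
```

    Because the surrogate's gradient direction is close to the target's, the same perturbation flips both predictions; transfer-based attacks scale this observation up to deep networks, where the overfitting problem the abstract describes arises.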

  • Open Access

    ARTICLE

    Practical Adversarial Attacks Imperceptible to Humans in Visual Recognition

    Donghyeok Park1, Sumin Yeon2, Hyeon Seo2, Seok-Jun Buu2, Suwon Lee2,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.142, No.3, pp. 2725-2737, 2025, DOI:10.32604/cmes.2025.061732 - 03 March 2025

    Abstract: Recent research on adversarial attacks has primarily focused on white-box attack techniques, with limited exploration of black-box attack methods. Furthermore, in many black-box research scenarios, it is assumed that the output label and probability distribution can be observed without imposing any constraints on the number of attack attempts. Unfortunately, this disregard for the real-world practicality of attacks, particularly their potential for human detectability, has left a gap in the research landscape. Considering these limitations, our study focuses on using a similar color attack method, assuming access only to the output label, limiting the number of…

  • Open Access

    ARTICLE

    Secure Channel Estimation Using Norm Estimation Model for 5G Next Generation Wireless Networks

    Khalil Ullah1,*, Song Jian1, Muhammad Naeem Ul Hassan1, Suliman Khan2, Mohammad Babar3,*, Arshad Ahmad4, Shafiq Ahmad5

    CMC-Computers, Materials & Continua, Vol.82, No.1, pp. 1151-1169, 2025, DOI:10.32604/cmc.2024.057328 - 03 January 2025

    Abstract: The emergence of next generation networks (NextG), including 5G and beyond, is reshaping the technological landscape of cellular and mobile networks. These networks are sufficiently scaled to interconnect billions of users and devices. Researchers in academia and industry are focusing on technological advancements to achieve high-speed transmission, cell planning, and latency reduction to facilitate emerging applications such as virtual reality, the metaverse, smart cities, smart health, and autonomous vehicles. NextG continuously improves its network functionality to support these applications. Multiple input multiple output (MIMO) technology offers spectral efficiency, dependability, and overall performance in conjunction with…

  • Open Access

    ARTICLE

    Improving Transferable Targeted Adversarial Attack for Object Detection Using RCEN Framework and Logit Loss Optimization

    Zhiyi Ding, Lei Sun*, Xiuqing Mao, Leyu Dai, Ruiyang Ding

    CMC-Computers, Materials & Continua, Vol.80, No.3, pp. 4387-4412, 2024, DOI:10.32604/cmc.2024.052196 - 12 September 2024

    Abstract: Object detection finds wide application in various sectors, including autonomous driving, industry, and healthcare. Recent studies have highlighted the vulnerability of object detection models built using deep neural networks when confronted with carefully crafted adversarial examples. This not only reveals their shortcomings in defending against malicious attacks but also raises widespread concerns about the security of existing systems. Most existing adversarial attack strategies focus primarily on image classification problems, failing to fully exploit the unique characteristics of object detection models, thus resulting in widespread deficiencies in their transferability. Furthermore, previous research has predominantly concentrated on…

  • Open Access

    ARTICLE

    Physics-Constrained Robustness Enhancement for Tree Ensembles Applied in Smart Grid

    Zhibo Yang, Xiaohan Huang, Bingdong Wang, Bin Hu, Zhenyong Zhang*

    CMC-Computers, Materials & Continua, Vol.80, No.2, pp. 3001-3019, 2024, DOI:10.32604/cmc.2024.053369 - 15 August 2024

    Abstract: With the widespread use of machine learning (ML) technology, the operational efficiency and responsiveness of power grids have been significantly enhanced, allowing smart grids to achieve high levels of automation and intelligence. However, tree ensemble models commonly used in smart grids are vulnerable to adversarial attacks, making it urgent to enhance their robustness. To address this, we propose a robustness enhancement method that incorporates physical constraints into the node-splitting decisions of tree ensembles. Our algorithm improves robustness by developing a dataset of adversarial examples that comply with physical laws, ensuring training data accurately reflects possible…

  • Open Access

    ARTICLE

    Local Adaptive Gradient Variance Attack for Deep Fake Fingerprint Detection

    Chengsheng Yuan1,2, Baojie Cui1,2, Zhili Zhou3, Xinting Li4,*, Qingming Jonathan Wu5

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 899-914, 2024, DOI:10.32604/cmc.2023.045854 - 30 January 2024

    Abstract: In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) tasks because of its remarkable performance. However, recent studies have shown that these deep fake fingerprint detection (DFFD) models are not resistant to attacks by adversarial examples, which are generated by the introduction of subtle perturbations in the fingerprint image, allowing the model to make fake judgments. Most of the existing adversarial example generation methods are based on gradient optimization, which is easy to fall into local optima, resulting in poor transferability of adversarial attacks. In addition, the perturbation added…

  • Open Access

    ARTICLE

    Evaluating the Efficacy of Latent Variables in Mitigating Data Poisoning Attacks in the Context of Bayesian Networks: An Empirical Study

    Shahad Alzahrani1, Hatim Alsuwat2, Emad Alsuwat3,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.2, pp. 1635-1654, 2024, DOI:10.32604/cmes.2023.044718 - 29 January 2024

    Abstract: Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework…

  • Open Access

    ARTICLE

    Enhancing Healthcare Data Security and Disease Detection Using Crossover-Based Multilayer Perceptron in Smart Healthcare Systems

    Mustufa Haider Abidi*, Hisham Alkhalefah, Mohamed K. Aboudaif

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.1, pp. 977-997, 2024, DOI:10.32604/cmes.2023.044169 - 30 December 2023

    Abstract: Healthcare data requires accurate disease detection analysis, real-time monitoring, and advancements to ensure proper treatment for patients. Consequently, Machine Learning methods are widely utilized in Smart Healthcare Systems (SHS) to extract valuable features from heterogeneous and high-dimensional healthcare data for predicting various diseases and monitoring patient activities. These methods are employed across different domains that are susceptible to adversarial attacks, necessitating careful consideration. Hence, this paper proposes a crossover-based Multilayer Perceptron (CMLP) model. The collected samples are pre-processed and fed into the crossover-based multilayer perceptron neural network to detect adversarial attacks on the medical…
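    The truncated abstract does not detail the crossover operation itself. As a hedged, generic sketch of what crossover usually means in this setting (single-point recombination of two candidate weight vectors, a standard evolutionary operator; this is an assumption about the general technique, not the paper's exact CMLP design):

```python
import random

def single_point_crossover(parent_a, parent_b, rng):
    """Recombine two equal-length weight vectors at a random interior cut.

    Children inherit a prefix from one parent and a suffix from the other,
    the standard crossover operator in evolutionary training schemes.
    """
    assert len(parent_a) == len(parent_b) >= 2
    cut = rng.randrange(1, len(parent_a))  # cut strictly inside the vector
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]

rng = random.Random(42)  # seeded for reproducibility
a = [0.10, 0.20, 0.30, 0.40]  # flattened weights of one candidate MLP
b = [1.10, 1.20, 1.30, 1.40]  # flattened weights of another candidate
child_a, child_b = single_point_crossover(a, b, rng)
print(child_a, child_b)
```

    In an evolutionary loop, the children would be evaluated on a fitness criterion (e.g., detection accuracy) and the best candidates retained for the next generation.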

  • Open Access

    ARTICLE

    An Efficient Character-Level Adversarial Attack Inspired by Textual Variations in Online Social Media Platforms

    Jebran Khan1, Kashif Ahmad2, Kyung-Ah Sohn1,3,*

    Computer Systems Science and Engineering, Vol.47, No.3, pp. 2869-2894, 2023, DOI:10.32604/csse.2023.040159 - 09 November 2023

    Abstract: In recent years, the growing popularity of social media platforms has led to several interesting natural language processing (NLP) applications. However, these social media-based NLP applications are subject to different types of adversarial attacks due to the vulnerabilities of machine learning (ML) and NLP techniques. This work presents a new low-level adversarial attack recipe inspired by textual variations in online social media communication. These variations are generated to convey the message using out-of-vocabulary words based on visual and phonetic similarities of characters and words in the shortest possible form. The intuition of the proposed scheme…
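    As a hedged illustration of the general idea behind such character-level attacks (visually similar substitutions; the mapping below is invented for the example and is not the paper's substitution set):

```python
# A toy character-substitution table based on visual similarity, as seen in
# informal social-media spelling; the mapping is illustrative only.
HOMOGLYPHS = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}

def perturb(text: str) -> str:
    """Swap mapped characters so tokens fall out of an NLP model's
    vocabulary while remaining readable to a human."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

print(perturb("adversarial attack"))  # -> "@dv3r$@r1@l @tt@ck"
```

    A human reader still parses the perturbed string easily, but a tokenizer trained on standard spelling maps it to out-of-vocabulary tokens, which is the gap this class of attacks exploits.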

  • Open Access

    ARTICLE

    An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments

    Weizheng Wang1,3, Xiangqi Wang2,*, Xianmin Pan1, Xingxing Gong3, Jian Liang3, Pradip Kumar Sharma4, Osama Alfarraj5, Wael Said6

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3859-3876, 2023, DOI:10.32604/cmc.2023.041346 - 08 October 2023

    Abstract: Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations. The remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges in AE defense through image denoising, this paper proposes a novel AE detection technique. The proposed technique combines multiple traditional image-denoising algorithms and Convolutional Neural Network (CNN) network structures. The used detector model integrates the classification results of different models as the input to…

Displaying results 11-20 of 41 (page 2).