Search Results (8)
  • Open Access

    ARTICLE

    CASBA: Capability-Adaptive Shadow Backdoor Attack against Federated Learning

    Hongwei Wu*, Guojian Li, Hanyun Zhang, Zi Ye, Chao Ma

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.071008 - 12 January 2026

Abstract Federated Learning (FL) protects data privacy through a distributed training mechanism, yet its decentralized nature also introduces new security vulnerabilities. Backdoor attacks inject malicious triggers into the global model through compromised updates, posing significant threats to model integrity and becoming a key focus in FL security. Existing backdoor attack methods typically embed triggers directly into original images and consider only data heterogeneity, resulting in limited stealth and adaptability. To address the heterogeneity of malicious client devices, this paper proposes a novel backdoor attack method named Capability-Adaptive Shadow Backdoor Attack (CASBA). By incorporating measurements of clients’…

  • Open Access

    ARTICLE

    How Robust Are Language Models against Backdoors in Federated Learning?

    Seunghan Kim1,#, Changhoon Lim2,#, Gwonsang Ryu3, Hyunil Kim2,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.2, pp. 2617-2630, 2025, DOI:10.32604/cmes.2025.071190 - 26 November 2025

Abstract Federated Learning enables privacy-preserving training of Transformer-based language models, but remains vulnerable to backdoor attacks that compromise model reliability. This paper presents a comparative analysis of defense strategies against both classical and advanced backdoor attacks, evaluated across autoencoding and autoregressive models. Unlike prior studies, this work provides the first systematic comparison of perturbation-based, screening-based, and hybrid defenses in Transformer-based FL environments. Our results show that screening-based defenses consistently outperform perturbation-based ones, effectively neutralizing most attacks across architectures. However, this robustness comes with significant computational overhead, revealing a clear trade-off between security and efficiency. By explicitly…

  • Open Access

    ARTICLE

    Proactive Disentangled Modeling of Trigger–Object Pairings for Backdoor Defense

    Kyle Stein1,*, Andrew A. Mahyari1,2, Guillermo Francia III3, Eman El-Sheikh3

    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 1001-1018, 2025, DOI:10.32604/cmc.2025.068201 - 29 August 2025

Abstract Deep neural networks (DNNs) and generative AI (GenAI) are increasingly vulnerable to backdoor attacks, where adversaries embed triggers into inputs to cause models to misclassify or misinterpret target labels. Beyond traditional single-trigger scenarios, attackers may inject multiple triggers across various object classes, forming unseen backdoor-object configurations that evade standard detection pipelines. In this paper, we introduce DBOM (Disentangled Backdoor-Object Modeling), a proactive framework that leverages structured disentanglement to identify and neutralize both seen and unseen backdoor threats at the dataset level. Specifically, DBOM factorizes input image representations by modeling triggers and objects as independent primitives in the…

  • Open Access

    ARTICLE

    Defending against Backdoor Attacks in Federated Learning by Using Differential Privacy and OOD Data Attributes

    Qingyu Tan, Yan Li, Byeong-Seok Shin*

    CMES-Computer Modeling in Engineering & Sciences, Vol.143, No.2, pp. 2417-2428, 2025, DOI:10.32604/cmes.2025.063811 - 30 May 2025

Abstract Federated Learning (FL) is a practical solution that leverages distributed data across devices without the need for centralized data storage, enabling multiple participants to jointly train models while preserving data privacy and avoiding direct data sharing. Despite its privacy-preserving advantages, FL remains vulnerable to backdoor attacks, where malicious participants introduce backdoors into local models that are then propagated to the global model through the aggregation process. While existing differential privacy defenses have demonstrated effectiveness against backdoor attacks in FL, they often incur a significant degradation in the performance of the aggregated models on benign tasks…

  • Open Access

    ARTICLE

    A Gaussian Noise-Based Algorithm for Enhancing Backdoor Attacks

    Hong Huang, Yunfei Wang*, Guotao Yuan, Xin Li

    CMC-Computers, Materials & Continua, Vol.80, No.1, pp. 361-387, 2024, DOI:10.32604/cmc.2024.051633 - 18 July 2024

Abstract Deep Neural Networks (DNNs) are integral to various aspects of modern life, enhancing work efficiency. Nonetheless, their susceptibility to diverse attack methods, including backdoor attacks, raises security concerns. We investigate backdoor attack methods for image categorization tasks to promote the development of DNNs toward higher security. Research on backdoor attacks currently faces significant challenges due to the distinct and abnormal data patterns of malicious samples and the meticulous data screening by developers, which hinder practical attack implementation. To overcome these challenges, this study proposes a Gaussian Noise-Targeted Universal Adversarial Perturbation (GN-TUAP) algorithm. This approach…

  • Open Access

    ARTICLE

    Adaptive Backdoor Attack against Deep Neural Networks

    Honglu He, Zhiying Zhu, Xinpeng Zhang*

    CMES-Computer Modeling in Engineering & Sciences, Vol.136, No.3, pp. 2617-2633, 2023, DOI:10.32604/cmes.2023.025923 - 09 March 2023

Abstract In recent years, the number of parameters of deep neural networks (DNNs) has been increasing rapidly. The training of DNNs is typically computation-intensive. As a result, many users leverage cloud computing and outsource their training procedures. Outsourcing computation introduces a potential risk called a backdoor attack, in which a well-trained DNN would perform abnormally on inputs bearing a certain trigger. Backdoor attacks can also be classified as attacks that exploit fake images. However, most backdoor attacks design a uniform trigger for all images, which can be easily detected and removed. In this paper, we propose…

  • Open Access

    ARTICLE

    Byte-Level Function-Associated Method for Malware Detection

    Jingwei Hao*, Senlin Luo, Limin Pan

    Computer Systems Science and Engineering, Vol.46, No.1, pp. 719-734, 2023, DOI:10.32604/csse.2023.033923 - 20 January 2023

Abstract The byte stream is widely used in malware detection due to its independence from reverse engineering. However, existing methods based on the byte stream implement an indiscriminate feature extraction strategy, which ignores the functional differences of bytes in different segments and fails to achieve targeted feature extraction for various byte semantic representation modes, resulting in byte semantic confusion. To address this issue, an enhanced adversarial byte-function-associated method for malware backdoor attacks is proposed in this paper by categorizing various function bytes into three functions involving structure, code, and data. The Minhash algorithm, grayscale mapping,…

  • Open Access

    ARTICLE

    An Improved Optimized Model for Invisible Backdoor Attack Creation Using Steganography

    Daniyal M. Alghazzawi1, Osama Bassam J. Rabie1, Surbhi Bhatia2, Syed Hamid Hasan1,*

    CMC-Computers, Materials & Continua, Vol.72, No.1, pp. 1173-1193, 2022, DOI:10.32604/cmc.2022.022748 - 24 February 2022

Abstract The training process of Deep Neural Networks (DNNs) is widely affected by backdoor attacks. A backdoor attack conceals its identity in the DNN by performing well on regular samples while displaying malicious behavior on inputs carrying data-poisoning triggers. State-of-the-art backdoor attacks mainly follow the assumption that the trigger is sample-agnostic, i.e., different poisoned samples use the same trigger. To overcome this problem, in this work we create a backdoor attack to test its strength in withstanding complex defense strategies, and to achieve this objective, we develop an improved…

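Most of the results above concern trigger-based backdoor poisoning, in which an attacker stamps a small pattern onto a fraction of the training images and flips their labels to a chosen target class. The sketch below illustrates that classic pixel-patch scheme in its simplest form; the function name and parameters are illustrative assumptions and do not come from any of the listed papers.

```python
import numpy as np

def poison(images, labels, target_label, rate=0.1, patch=3):
    """Minimal sketch of pixel-patch backdoor poisoning (hypothetical
    helper): stamp a white square in the bottom-right corner of a
    random fraction of images and flip their labels to the target."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = np.random.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch:, -patch:] = 1.0   # the trigger pattern
    labels[idx] = target_label            # the attacker's label flip
    return images, labels

# Toy usage: 100 grayscale 28x28 images with 10 classes.
x = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
x_poisoned, y_poisoned = poison(x, y, target_label=7)
```

A model trained on such data behaves normally on clean inputs but predicts the target class whenever the patch is present, which is why the entries above focus on making triggers stealthier or on filtering them out.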
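On the defense side, the differential-privacy entry above builds on the standard clip-and-noise aggregation rule for federated learning. The following is a minimal sketch of that baseline only, not the paper's method; all names and default values are assumptions for illustration.

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_std=0.01):
    """Baseline DP-style aggregation sketch: clip each client's update
    to a maximum L2 norm, average the clipped updates, then add
    Gaussian noise. Clipping bounds the influence of any single
    (possibly backdoored) update; the noise blurs what survives."""
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    aggregate = np.mean(clipped, axis=0)
    return aggregate + np.random.normal(0.0, noise_std, size=aggregate.shape)

# Toy usage: 5 clients, each submitting a 10-dimensional update.
updates = [np.random.randn(10) for _ in range(5)]
global_update = dp_aggregate(updates)
```

The trade-off noted in that abstract shows up directly here: larger noise_std suppresses backdoored updates more strongly but also degrades the aggregated model on benign tasks.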