Search Results (80)
  • Open Access

    ARTICLE

    FedDPL: Federated Dynamic Prototype Learning for Privacy-Preserving Malware Analysis across Heterogeneous Clients

    Danping Niu1, Yuan Ping1,*, Chun Guo2, Xiaojun Wang3, Bin Hao4

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.073630 - 12 January 2026

    Abstract With the increasing complexity of malware attack techniques, traditional detection methods face significant challenges, such as privacy preservation, data heterogeneity, and the lack of category information. To address these issues, we propose Federated Dynamic Prototype Learning (FedDPL) for malware classification by integrating Federated Learning with a specially designed K-means algorithm. Under the Federated Learning framework, model training occurs locally without data sharing, effectively protecting user data privacy and preventing the leakage of sensitive information. Furthermore, to tackle the challenges of data heterogeneity and the lack of category information, FedDPL introduces a dynamic prototype learning mechanism, which adaptively adjusts the…
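The abstract above describes classifying malware by learning one prototype per class and matching samples to the nearest prototype. As a minimal, non-federated sketch of that core idea (the function names `class_prototypes` and `classify` are illustrative, not from the paper, and the full FedDPL method adds K-means-based dynamic adjustment under federated training):

```python
import numpy as np

def class_prototypes(features, labels):
    # One prototype per class: the mean of that class's feature vectors.
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(x, classes, protos):
    # Assign x to the class of the nearest prototype (Euclidean distance).
    return classes[np.argmin(np.linalg.norm(protos - x, axis=1))]

# Toy data: two well-separated classes in 2-D feature space.
feats = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [2.8, 3.2]])
labels = np.array([0, 0, 1, 1])
classes, protos = class_prototypes(feats, labels)
pred = classify(np.array([2.9, 3.1]), classes, protos)  # nearest to class-1 prototype
```

In a federated variant, each client would compute local prototypes and share only those (not raw samples) for aggregation.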

  • Open Access

    REVIEW

    A Survey of Federated Learning: Advances in Architecture, Synchronization, and Security Threats

    Faisal Mahmud1, Fahim Mahmud2, Rashedur M. Rahman1,*

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.073519 - 12 January 2026

    Abstract Federated Learning (FL) has become a leading decentralized solution that enables multiple clients to train a model collaboratively without directly sharing raw data, making it suitable for privacy-sensitive applications such as healthcare, finance, and smart systems. As the field continues to evolve, its research landscape has become more complex and scattered, covering different system designs, training methods, and privacy techniques. This survey is organized around three core challenges: how data is distributed, how models are synchronized, and how attacks are defended against. It provides a structured and up-to-date review of…
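The synchronization challenge the survey names is commonly addressed by federated averaging: each client takes a local training step on its private data, and the server averages the resulting models weighted by dataset size. A minimal sketch under those assumptions (least-squares clients; `local_update` and `fed_avg` are illustrative names, not from the survey):

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    # One gradient-descent step on a client's private least-squares data;
    # raw (X, y) never leaves the client, only the updated weights do.
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(global_w, client_datasets):
    # Server-side synchronization: average client updates, weighted by size.
    sizes = np.array([len(y) for _, y in client_datasets])
    updates = [local_update(global_w.copy(), d) for d in client_datasets]
    return np.average(updates, axis=0, weights=sizes)

# Toy run: two clients drawing noiseless data from a shared linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 150):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))
w = np.zeros(2)
for _ in range(200):
    w = fed_avg(w, clients)  # w converges toward true_w
```

Asynchronous and secure-aggregation variants covered by such surveys change how and when this averaging happens, not the basic shape of it.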

  • Open Access

    ARTICLE

    Mitigating Attribute Inference in Split Learning via Channel Pruning and Adversarial Training

    Afnan Alhindi*, Saad Al-Ahmadi, Mohamed Maher Ben Ismail

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072625 - 12 January 2026

    Abstract Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server sub-networks to mitigate the exposure of sensitive data and reduce the overhead on client devices, making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches. In fact, the intermediate data transmitted to the server sub-model may include patterns or information that could reveal sensitive data. Moreover,…

  • Open Access

    ARTICLE

    Privacy-Preserving Personnel Detection in Substations via Federated Learning with Dynamic Noise Adaptation

    Yuewei Tian1, Yang Su2, Yujia Wang1, Lisa Guo1, Xuyang Wu3,*, Lei Cao4, Fang Ren3

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072081 - 12 January 2026

    Abstract This study addresses the risk of privacy leakage during the transmission and sharing of multimodal data in smart grid substations by proposing a three-tier privacy-preserving architecture based on asynchronous federated learning. The framework integrates blockchain technology, the InterPlanetary File System (IPFS) for distributed storage, and a dynamic differential privacy mechanism to achieve collaborative security across the storage, service, and federated coordination layers. It accommodates both multimodal data classification and object detection tasks, enabling the identification and localization of key targets and abnormal behaviors in substation scenarios while ensuring privacy protection. This effectively mitigates the single-point…

  • Open Access

    ARTICLE

    A Decentralized Identity Framework for Secure Federated Learning in Healthcare

    Samuel Acheme*, Glory Nosawaru Edegbe

    Journal of Cyber Security, Vol.8, pp. 1-31, 2026, DOI:10.32604/jcs.2026.073923 - 07 January 2026

    Abstract Federated learning (FL) enables collaborative model training across decentralized datasets, thus maintaining the privacy of training data. However, FL remains vulnerable to malicious actors, posing significant risks in privacy-sensitive domains like healthcare. Previous machine learning trust frameworks, while promising, often rely on resource-intensive blockchain ledgers, introducing computational overhead and metadata leakage risks. To address these limitations, this study presents a novel Decentralized Identity (DID) framework for mutual authentication that establishes verifiable trust among participants in FL without dependence on centralized authorities or high-cost blockchain ledgers. The proposed system leverages Decentralized Identifiers (DIDs) and Verifiable Credentials…

  • Open Access

    ARTICLE

    Blockchain-Assisted Improved Cryptographic Privacy-Preserving FL Model with Consensus Algorithm for ORAN

    Raghavendra Kulkarni1, Venkata Satya Suresh kumar Kondeti1, Binu Sudhakaran Pillai2, Surendran Rajendran3,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-23, 2026, DOI:10.32604/cmc.2025.069835 - 10 November 2025

    Abstract The next-generation RAN, known as Open Radio Access Network (ORAN), offers several advantages, including cost-effectiveness, network flexibility, and interoperability. ORAN applications that utilize machine learning (ML) and artificial intelligence (AI) techniques have now become standard practice. The need for Federated Learning (FL) for ML model training in ORAN environments is heightened by the modularized structure of the ORAN architecture and the shortcomings of conventional ML techniques. However, the traditional plaintext sharing of FL model updates in multi-BS contexts is susceptible to privacy violations such as deep leakage from gradients and inference attacks. Therefore, this research presents a…

  • Open Access

    ARTICLE

    A Privacy-Preserving Convolutional Neural Network Inference Framework for AIoT Applications

    Haoran Wang1, Shuhong Yang2, Kuan Shao2, Tao Xiao2, Zhenyong Zhang2,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-18, 2026, DOI:10.32604/cmc.2025.069404 - 10 November 2025

    Abstract With the rapid development of the Artificial Intelligence of Things (AIoT), convolutional neural networks (CNNs) have demonstrated remarkable performance in AIoT applications across various inference tasks. However, users have concerns about privacy leakage from the use of AI, as well as the performance and efficiency of computation on resource-constrained IoT edge devices. Therefore, this paper proposes an efficient privacy-preserving CNN framework (EPPA) based on the Fully Homomorphic Encryption (FHE) scheme for AIoT application scenarios. In the plaintext domain, we verify schemes with different activation structures to determine the…

  • Open Access

    ARTICLE

    DPIL-Traj: Differential Privacy Trajectory Generation Framework with Imitation Learning

    Huaxiong Liao1,2, Xiangxuan Zhong2, Xueqi Chen2, Yirui Huang3, Yuwei Lin2, Jing Zhang2,*, Bruce Gu4

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-21, 2026, DOI:10.32604/cmc.2025.069208 - 10 November 2025

    Abstract The generation of synthetic trajectories has become essential in various fields for analyzing complex movement patterns. However, the use of real-world trajectory data poses significant privacy risks, such as location re-identification and correlation attacks. To address these challenges, privacy-preserving trajectory generation methods are critical for applications relying on sensitive location data. This paper introduces DPIL-Traj, an advanced framework designed to generate synthetic trajectories while achieving a superior balance between data utility and privacy preservation. Firstly, the framework incorporates Differential Privacy Clustering, which anonymizes trajectory data by applying differential privacy techniques that add noise, ensuring the…
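The noise-addition step the abstract mentions is typically the Laplace mechanism: perturb each released coordinate with noise scaled to sensitivity/epsilon. A minimal sketch of that single step only (the function name `dp_perturb_trajectory` and the sensitivity value are illustrative; DPIL-Traj's full pipeline applies noise inside a clustering and imitation-learning framework):

```python
import numpy as np

def dp_perturb_trajectory(traj, epsilon, sensitivity=1.0, seed=0):
    # Laplace mechanism: noise with scale = sensitivity / epsilon per
    # coordinate; smaller epsilon means more noise and stronger privacy.
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return traj + rng.laplace(loc=0.0, scale=scale, size=traj.shape)

# Toy trajectory of three 2-D points (e.g., normalized lat/lon).
traj = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
noisy = dp_perturb_trajectory(traj, epsilon=1.0)
```

The utility/privacy balance the paper targets comes from choosing epsilon (and where in the pipeline the noise is injected) so the noisy trajectories remain useful for downstream analysis.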

  • Open Access

    ARTICLE

    Privacy-Preserving Gender-Based Customer Behavior Analytics in Retail Spaces Using Computer Vision

    Ginanjar Suwasono Adi1, Samsul Huda2,*, Griffani Megiyanto Rahmatullah3, Dodit Suprianto1, Dinda Qurrota Aini Al-Sefy3, Ivon Sandya Sari Putri4, Lalu Tri Wijaya Nata Kusuma5

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-23, 2026, DOI:10.32604/cmc.2025.068619 - 10 November 2025

    Abstract In the competitive retail industry of the digital era, data-driven insights into gender-specific customer behavior are essential. They support the optimization of store performance, layout design, product placement, and targeted marketing. However, existing computer vision solutions often rely on facial recognition to gather such insights, raising significant privacy and ethical concerns. To address these issues, this paper presents a privacy-preserving customer analytics system through two key strategies. First, we deploy a deep learning framework using YOLOv9s, trained on the RCA-TVGender dataset. Cameras are positioned perpendicular to observation areas to reduce facial visibility while maintaining accurate…

  • Open Access

    REVIEW

    Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation

    Khulud Salem Alshudukhi1,*, Sijjad Ali2, Mamoona Humayun3,*, Omar Alruwaili4

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3029-3085, 2025, DOI:10.32604/cmes.2025.073705 - 23 December 2025

    Abstract Problem: The integration of Artificial Intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the “black box” nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves. Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity—as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a…

Displaying results 1-10 of 80 (page 1).