Search Results (147)
  • Open Access

    ARTICLE

    FedGLP-ADP: Federated Learning with Gradient-Based Layer-Wise Personalization and Adaptive Differential Privacy

    Di Xiao*, Wenting Jiang, Min Li

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.079808 - 09 April 2026

    Abstract The rapid advancement of the Internet of Things (IoT) has transformed edge devices from simple data collectors into intelligent units capable of local processing and collaborative learning. However, the vast amounts of sensitive data generated by these devices face severe constraints from “data silos” and risks of privacy breaches. Federated learning (FL), as a distributed collaborative paradigm that avoids sharing raw data, holds great promise in the IoT domain. Nevertheless, it remains vulnerable to gradient leakage threats. While traditional differential privacy (DP) techniques mitigate privacy risks, they often come at the cost of significantly reduced…

  • Open Access

    ARTICLE

    Trustworthy Personalized Federated Recommender System with Blockchain-Assisted Decentralized Reward Management

    Waqar Ali1, May Altulyan2, Ghulam Farooque3, Siyuan Li4, Jie Shao4,5,*

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.078599 - 09 April 2026

    Abstract Federated recommender systems (FedRS) enable collaborative model training while preserving user privacy, yet they remain vulnerable to adversarial attacks, unreliable client updates, and misaligned incentives in decentralized environments. Existing approaches struggle to jointly preserve personalization, robustness, and trust when user data are highly non-IID and recommendation quality is governed by ranking-oriented objectives. To address these challenges, we propose a Trustworthy Federated Recommender System (T-FedRS) that extends federated neural collaborative filtering by integrating a ranking-aware reputation mechanism and a lightweight blockchain layer for transparent incentive allocation. Personalization is preserved through locally maintained user embeddings, while item parameters…

  • Open Access

    ARTICLE

    Explainable Anomaly Detection for System Logs in Distributed Environments

    Zhaojun Gu1, Wenlong Yue2, Chunbo Liu1,*

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.077388 - 09 April 2026

    Abstract Anomaly detection in system logs is a critical technical means for identifying potential faults and security risks. In distributed environments, traditional deep learning-based log anomaly detection methods often suffer from shortcomings in transparency, computational overhead, and data privacy protection. To address these issues, this paper proposes a federated learning-driven lightweight and explainable log anomaly detection framework named FedXLog. The framework adapts to heterogeneous logs through hierarchical feature extraction, introduces the Federated Gradient Trajectory Aggregation algorithm (FedGradTrace) to enhance the explainability of the parameter aggregation process, constructs lightweight models using knowledge distillation, and achieves globally consistent…

  • Open Access

    ARTICLE

    Federated Learning for Malicious Domain Detection via Privacy-Preserving DNS Traffic Analysis

    Samar Abbas Mangi1,*, Samina Rajper1, Noor Ahmed Shaikh1, Shehzad Ashraf Chaudhry2,3

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.077337 - 09 April 2026

    Abstract Malicious domain detection (MDD) from DNS telemetry enables early threat hunting but is constrained by privacy and data-sharing barriers across organizations. We present a deployable federated learning (FL) pipeline that trains a compact deep neural network (DNN; 64-32-16 with ReLU and dropout 0.3) locally at each client and exchanges only masked model updates. Privacy is enforced via secure aggregation (the server observes only an aggregate of masked updates) and optional server-side differential privacy (DP) via clipping and Gaussian noise. Our feature schema combines DNS-specific lexical cues (character n-grams, entropy, TLD indicators) with lightweight behavioral signals…
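The "masked model updates" step described in this abstract is typically realized with pairwise masking, a common secure-aggregation construction in which per-pair masks cancel when the server sums the clients' contributions. A minimal sketch under that assumption (function names, client count, and update sizes are illustrative, not from the paper):

```python
import numpy as np

def pairwise_masks(n_clients, dim, seed=42):
    """Cancelling pairwise masks: for each pair (i, j), client i adds +m
    and client j adds -m, so the masks sum to zero across all clients."""
    rng = np.random.default_rng(seed)
    masks = np.zeros((n_clients, dim))
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = rng.normal(size=dim)
            masks[i] += m
            masks[j] -= m
    return masks

rng = np.random.default_rng(0)
updates = rng.normal(size=(4, 8))        # hypothetical local model updates
masked = updates + pairwise_masks(4, 8)  # what each client actually sends
aggregate = masked.mean(axis=0)          # server only learns the aggregate
assert np.allclose(aggregate, updates.mean(axis=0))
```

Individual masked updates look like noise to the server, yet the aggregate is exact; real deployments derive the pairwise masks from key agreement rather than a shared seed.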

  • Open Access

    ARTICLE

    Secure and Differentially Private Edge-Cloud Federated Learning Framework for Privacy-Preserving Maritime AIS Intelligence

    Abuzar Khan1, Abid Iqbal2,*, Ghassan Husnain1,*, Fahad Masood1, Mohammed Al-Naeem3, Sajid Iqbal4

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.077222 - 09 April 2026

    Abstract Cloud computing now supports large-scale maritime analytics, yet offloading rich Automatic Identification System (AIS) data to the cloud exposes sensitive operational patterns and complicates compliance with cross-border privacy regulations. This work addresses the gap between growing demand for AI-driven vessel intelligence and the limited availability of practical, privacy-preserving cloud solutions. We introduce a privacy-by-design edge-cloud framework in which ports and vessels serve as federated clients, training vessel-type classifiers on local AIS trajectories while transmitting only clipped, Gaussian-perturbed updates to a zero-trust cloud coordinator employing secure and robust aggregation. Using a public AIS corpus with realistic…
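The "clipped, Gaussian-perturbed updates" mentioned here are, in generic form, the Gaussian mechanism applied to each client update before transmission. A minimal sketch of that step (the parameter values and function name are assumptions, not the paper's):

```python
import numpy as np

def sanitize_update(update, clip_norm=1.0, noise_multiplier=0.8, rng=None):
    """Clip an update to L2 norm <= clip_norm, then add Gaussian noise
    with standard deviation proportional to the clipping bound."""
    rng = rng if rng is not None else np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```

The clipping bound caps any single client's influence on the aggregate; the noise scale relative to that bound, accumulated over training rounds, is what determines the differential-privacy budget.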

  • Open Access

    ARTICLE

    EdgeTrustX: A Privacy-Aware Federated Transformer Framework for Scalable and Explainable IoT Threat Detection

    Saleh Alharbi*

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.073584 - 09 April 2026

    Abstract Real-time threat detection in Internet of Things (IoT) networks requires scalable, privacy-preserving, and interpretable models capable of operating under strict latency constraints. This paper presents EdgeTrustX, a privacy-aware federated transformer framework that addresses these challenges by combining transformer-based representation learning with federated optimisation, differential privacy, and homomorphic encryption. The framework enables collaborative model training across heterogeneous IoT devices without exposing sensitive local data while maintaining computational feasibility for edge deployment. A multi-head attention mechanism integrated with a secure aggregation protocol supports adaptive feature weighting and privacy-protected parameter exchange. To enhance transparency, an explainability module that…

  • Open Access

    ARTICLE

    DRAGON-MINE: Deep Reinforcement Adaptive Gradient Optimization Network for Mining Rare Events in Healthcare

    Mohammed Abdullah Alsuwaiket*

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.3, 2026, DOI:10.32604/cmes.2026.078169 - 30 March 2026

    Abstract The healthcare field is fraught with severe class imbalance, wherein critical conditions such as sepsis, cardiac arrest, and adverse drug reactions are rare but have dire clinical consequences. This paper presents a new framework, the Deep Reinforcement Adaptive Gradient Optimization Network for Mining Rare Events (DRAGON-MINE), to demonstrate how deep reinforcement learning can be used synergistically with adaptive gradient optimization to address the inherent weaknesses of current methods in predicting rare health events. The proposed architecture uses a dual pathway consisting of a reinforcement learning agent to dynamically reweight samples and an…

  • Open Access

    REVIEW

    Security and Privacy Challenges, Solutions, and Performance Evaluation in AIoT-Enabled Smart Societies

    Shahab Ali Khan1, Tehseen Mazhar2,3,*, Syed Faisal Abbas Shah4, Wasim Ahmad1, Sunawar Khan2, Afsha BiBi5, Usama Shah1, Habib Hamam6,7,8,9

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.3, 2026, DOI:10.32604/cmes.2026.075882 - 30 March 2026

    Abstract The convergence of Artificial Intelligence (AI) and the Internet of Things (IoT) has enabled Artificial Intelligence of Things (AIoT) systems that support intelligent and responsive smart societies, but it also introduces major security and privacy concerns across domains such as healthcare, transportation, and smart cities. This Systematic Literature Review (SLR) addresses three research questions: identifying major threats and challenges in AIoT ecosystems, reviewing state-of-the-art security and privacy techniques, and evaluating their effectiveness. An SLR covering the period from 2020 to 2025 was conducted using major academic digital libraries, including IEEE Xplore, ACM Digital Library, ScienceDirect,…

  • Open Access

    ARTICLE

    FedPA: Federated Learning with Performance-Based Averaging for Efficient Medical Image Classification

    Atif Mahmood1,*, Yasin Saleem1, Usman Tariq2, Yousef Ibrahim Daradkeh3, Adnan N. Qureshi4

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.3, 2026, DOI:10.32604/cmes.2025.073501 - 30 March 2026

    Abstract Federated learning is a decentralized model training paradigm with significant potential. However, the quality of client updates in a federated network can vary due to non-IID data distributions, leading to suboptimal global models. To address this issue, we propose a novel client selection strategy called FedPA (Performance-Based Federated Averaging). The proposed model selectively aggregates client updates based on a predefined performance threshold: only clients whose local models achieve an F1 score of 70% or higher after training are included in the aggregation process. Clients below this threshold receive the updated global model but do not contribute their…
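The selection rule this abstract describes (aggregate only clients whose local F1 score meets the threshold; otherwise retain the current global model) can be sketched as follows. The function name, the uniform average, and the toy weights are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fedpa_aggregate(global_weights, client_weights, f1_scores, threshold=0.70):
    """Average only the updates of clients whose local F1 score meets the
    threshold; if no client qualifies, keep the current global weights."""
    selected = [w for w, f1 in zip(client_weights, f1_scores) if f1 >= threshold]
    if not selected:
        return np.asarray(global_weights)
    return np.mean(selected, axis=0)

# Hypothetical round: three clients, one below the 70% F1 bar.
g = np.zeros(4)
clients = [np.full(4, 1.0), np.full(4, 3.0), np.full(4, 100.0)]
new_g = fedpa_aggregate(g, clients, f1_scores=[0.81, 0.75, 0.42])
# Only the first two clients are averaged; the third is excluded.
```

Filtering before averaging keeps low-quality (here, low-F1) updates from dragging the global model, at the cost of discarding the excluded clients' data signal for that round.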

  • Open Access

    REVIEW

    A Review on Penetration Testing for Privacy of Deep Learning Models

    Salma Akther1, Wencheng Yang1,*, Song Wang2, Shicheng Wei1, Ji Zhang1, Xu Yang3, Yanrong Lu4, Yan Li1

    CMC-Computers, Materials & Continua, Vol.87, No.2, 2026, DOI:10.32604/cmc.2026.076358 - 12 March 2026

    Abstract As deep learning (DL) models are increasingly deployed in sensitive domains (e.g., healthcare), concerns over privacy and security have intensified. Conventional penetration testing frameworks, such as OWASP and NIST, are effective for traditional networks and applications but lack the capabilities to address DL-specific threats, such as model inversion, membership inference, and adversarial attacks. This review provides a comprehensive analysis of penetration testing for the privacy of DL models, examining the shortfalls of existing frameworks, tools, and testing methodologies. Through systematic evaluation of existing literature and empirical analysis, we identify three major contributions: (i) a critical…

Displaying results 1-10 of 147 (page 1).