Search Results (91)
  • Open Access

    REVIEW

    Task Offloading and Edge Computing in IoT—Gaps, Challenges and Future Directions

    Hitesh Mohapatra*

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.076726 - 09 April 2026

    Abstract This review examines current approaches to real-time decision-making and task optimization in Internet of Things systems through the application of machine learning models deployed at the network edge. Existing literature shows that edge-based distributed intelligence reduces cloud dependency while addressing transmission latency, device energy use, and bandwidth limits. Recent optimization strategies employ dynamic task offloading mechanisms to determine optimal workload placement across local devices and edge servers without centralized coordination. Empirical findings from the literature indicate performance improvements with latency reductions of approximately 32.8% and energy efficiency gains of 27.4% compared to conventional cloud-centric models. …

  • Open Access

    ARTICLE

    A Multi-Agent Deep Reinforcement Learning-Based Task Offloading Method for 6G-Enabled Internet of Vehicles with Cloud-Edge-Device Collaboration

    Fangxiang Hu1, Qi Fu1,2,*, Shiwen Zhang1, Jing Huang1

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.074154 - 09 April 2026

    Abstract In the Internet of Vehicles (IoV) environment, the growing demand for computational resources from diverse vehicular applications often exceeds the capabilities of intelligent connected vehicles. Traditional approaches, which rely on one or more computational resources within the cloud-edge-device computing model, struggle to ensure overall service quality when handling high-density traffic flows and large-scale tasks. To address this issue, we propose a computational offloading scheme based on a cloud-edge-device collaborative 6G IoV edge computing model, namely, Multi-Agent Deep Reinforcement Learning-based and Server-weighted scoring Selection (MADRLSS), which aims to optimize dynamic offloading decisions and resource allocation. …

  • Open Access

    ARTICLE

    DRAGON-MINE: Deep Reinforcement Adaptive Gradient Optimization Network for Mining Rare Events in Healthcare

    Mohammed Abdullah Alsuwaiket*

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.3, 2026, DOI:10.32604/cmes.2026.078169 - 30 March 2026

    Abstract The healthcare field is fraught with challenges associated with severe class imbalance, wherein critical conditions such as sepsis, cardiac arrest, and adverse drug reactions are rare but have dire clinical consequences. This paper presents a new framework, Deep Reinforcement Adaptive Gradient Optimization Network for Mining Rare Events (DRAGON-MINE), to demonstrate how deep reinforcement learning can be used synergistically with adaptive gradient optimization to address the inherent weaknesses of current methods in the prediction of rare health events. The suggested architecture uses a dual-pathway design consisting of a reinforcement learning agent to dynamically reweigh samples and an …

  • Open Access

    ARTICLE

    A New Approach for Topology Control in Software Defined Wireless Sensor Networks Using Soft Actor-Critic

    Ho Hai Quan1,2, Le Huu Binh1,*, Nguyen Dinh Hoa Cuong3, Le Duc Huy4

    CMC-Computers, Materials & Continua, Vol.87, No.2, 2026, DOI:10.32604/cmc.2026.075549 - 12 March 2026

    Abstract Wireless Sensor Networks (WSNs) play a crucial role in numerous Internet of Things (IoT) applications and next-generation communication systems, yet they continue to face challenges in balancing energy efficiency and reliable connectivity. This study proposes SAC-HTC (Soft Actor-Critic-based High-performance Topology Control), a deep reinforcement learning (DRL) method based on the Actor-Critic framework, implemented within a Software Defined Wireless Sensor Network (SDWSN) architecture. In this approach, sensor nodes periodically transmit state information, including coordinates, node degree, transmission power, and neighbor lists, to a centralized controller. The controller acts as the reinforcement learning (RL) agent, with the …

  • Open Access

    ARTICLE

    A Novel Evolutionary Optimized Transformer-Deep Reinforcement Learning Framework for False Data Injection Detection in Industry 4.0 Smart Water Infrastructures

    Ahmad Salehiyan1, Nuria Serrano2, Francisco Hernando-Gallego3, Diego Martín2,*, José Vicente Álvarez-Bravo2

    CMC-Computers, Materials & Continua, Vol.87, No.2, 2026, DOI:10.32604/cmc.2026.075336 - 12 March 2026

    Abstract The increasing integration of cyber-physical components in Industry 4.0 water infrastructures has heightened the risk of false data injection (FDI) attacks, posing critical threats to operational integrity, resource management, and public safety. Traditional detection mechanisms often struggle to generalize across heterogeneous environments or adapt to sophisticated, stealthy threats. To address these challenges, we propose a novel evolutionary optimized transformer-based deep reinforcement learning framework (Evo-Transformer-DRL) designed for robust and adaptive FDI detection in smart water infrastructures. The proposed architecture integrates three powerful paradigms: a transformer encoder for modeling complex temporal dependencies in multivariate time series, a …

  • Open Access

    ARTICLE

    Mobility-Aware Federated Learning for Energy and Threat Optimization in Intelligent Transportation Systems

    Hamad Ali Abosaq1, Jarallah Alqahtani1,*, Fahad Masood2, Alanoud Al Mazroa3, Muhammad Asad Khan4, Akm Bahalul Haque5

    CMC-Computers, Materials & Continua, Vol.87, No.2, 2026, DOI:10.32604/cmc.2026.075250 - 12 March 2026

    Abstract The technological advancement of the vehicular Internet of Things (IoT) has transformed Intelligent Transportation Systems (ITS) into next-generation ITS. The connectivity of IoT nodes enables improved data availability and facilitates automatic control in the ITS environment. The exponential increase in IoT nodes has significantly increased the demand for an energy-efficient, mobility-aware, and secure system for distributed intelligence. This article presents a mobility-aware Deep Reinforcement Learning based Federated Learning (DRL-FL) approach to design an energy-efficient and threat-resilient ITS. In this approach, a Proximal Policy Optimization (PPO)-based DRL agent is first employed for adaptive client selection. Second, …

  • Open Access

    ARTICLE

    Heterogeneous Computing Power Scheduling Method Based on Distributed Deep Reinforcement Learning in Cloud-Edge-End Environments

    Jinwei Mao1,2, Wang Luo1,2,*, Jiangtao Xu3, Daohua Zhu3, Wei Liang3, Zhechen Huang3, Bao Feng1,2, Shuang Yang1,2

    CMC-Computers, Materials & Continua, Vol.87, No.2, 2026, DOI:10.32604/cmc.2026.072505 - 12 March 2026

    Abstract With the rapid development of power Internet of Things (IoT) scenarios such as smart factories and smart homes, numerous intelligent terminal devices and real-time interactive applications impose higher demands on computing latency and resource supply efficiency. Multi-access edge computing technology deploys cloud computing capabilities at the network edge, constructing distributed computing nodes and multi-access systems and offering infrastructure support for low-latency, high-reliability services. Existing research relies on a strong assumption that the environmental state is fully observable and fails to thoroughly consider the continuous time-varying features of edge server load fluctuations, …

  • Open Access

    ARTICLE

    A Regional Distribution Network Coordinated Optimization Strategy for Electric Vehicle Clusters Based on Parametric Deep Reinforcement Learning

    Lei Su1,2,3, Wanli Feng1,2,3, Cao Kan1,2,3, Mingjiang Wei1,2,3, Jihai Wang4, Pan Yu4, Lingxiao Yang5,*

    Energy Engineering, Vol.123, No.3, 2026, DOI:10.32604/ee.2025.071006 - 27 February 2026

    Abstract To address the high costs and operational instability of distribution networks caused by the large-scale integration of distributed energy resources (DERs), such as photovoltaic (PV) systems, wind turbines (WT), and energy storage (ES) devices, as well as the increased grid load fluctuations and safety risks due to uncoordinated electric vehicle (EV) charging, this paper proposes a novel dual-scale hierarchical collaborative optimization strategy. This strategy decouples system-level economic dispatch from distributed EV agent control, effectively resolving the resource coordination conflicts arising from the high computational complexity and poor scalability of existing centralized optimization, or from its reliance on local information …

  • Open Access

    ARTICLE

    Research on UAV–MEC Cooperative Scheduling Algorithms Based on Multi-Agent Deep Reinforcement Learning

    Yonghua Huo1,2, Ying Liu1,*, Anni Jiang3, Yang Yang3

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072681 - 12 January 2026

    Abstract With the advent of sixth-generation mobile communications (6G), space–air–ground integrated networks have become mainstream. This paper focuses on collaborative scheduling for mobile edge computing (MEC) under a three-tier heterogeneous architecture composed of mobile devices, unmanned aerial vehicles (UAVs), and macro base stations (BSs). This scenario typically faces fast channel fading, dynamic computational loads, and energy constraints, whereas classical queuing-theoretic or convex-optimization approaches struggle to yield robust solutions in highly dynamic settings. To address this issue, we formulate a multi-agent Markov decision process (MDP) for an air–ground-fused MEC system, unify link selection, bandwidth/power allocation, and task …

  • Open Access

    ARTICLE

    DRL-Based Task Scheduling and Trajectory Control for UAV-Assisted MEC Systems

    Sai Xu1,*, Jun Liu1,*, Shengyu Huang1, Zhi Li2

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.071865 - 12 January 2026

    Abstract In scenarios where ground-based cloud computing infrastructure is unavailable, unmanned aerial vehicles (UAVs) act as mobile edge computing (MEC) servers to provide on-demand computation services for ground terminals. To address the challenge of jointly optimizing task scheduling and UAV trajectory under limited resources and high UAV mobility, this paper presents PER-MATD3, a multi-agent deep reinforcement learning algorithm that integrates prioritized experience replay (PER) into the Centralized Training with Decentralized Execution (CTDE) framework. Specifically, PER-MATD3 enables each agent to learn a decentralized policy using only local observations during execution, while leveraging a shared replay buffer with …

Displaying results 1–10 of 91.