Search Results (30)
  • Open Access

    ARTICLE

    Intelligent Ridge Path Planning for Agriculture Robot Using Modified Q-Learning Algorithm

    A. Sivasangari1,*, V. J. K. Kishor Sonti1, J. Cruz Antony1, E. Murali1, D. Deepa1, A. Happonen2

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.074429 - 09 April 2026

    Abstract Over the past two decades, Precision Agriculture has received growing research attention, spurred by developments in robotics. Agricultural robotic equipment and drones, which can be operated by farmers, are appearing more frequently and being used to make farming easier and more productive. This paper develops a modified Q-learning algorithm. Q-learning is a reinforcement learning algorithm whose Q-values are updated in order to find the best routes for robotic devices to follow while avoiding obstacles. Different types of terrain and other factors that influence the development of good routes… More >
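For context, the Q-value update this abstract refers to can be sketched as a minimal tabular Q-learning loop on a toy grid with one obstacle. This is a generic illustration, not the paper's modified algorithm; the grid size, rewards, and hyperparameters are all illustrative assumptions:

```python
import random

# Minimal tabular Q-learning sketch on a 4x4 grid with one obstacle.
# Illustrates the generic Q-value update the abstract mentions, not the
# paper's modified algorithm; all parameters here are illustrative.
SIZE, OBSTACLE, GOAL = 4, (1, 1), (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {((r, c), a): 0.0
     for r in range(SIZE) for c in range(SIZE)
     for a in range(len(ACTIONS))}

def step(state, action):
    """Apply an action; obstacle cells block movement and are penalized."""
    dr, dc = ACTIONS[action]
    nxt = (min(max(state[0] + dr, 0), SIZE - 1),
           min(max(state[1] + dc, 0), SIZE - 1))
    if nxt == OBSTACLE:          # blocked cell: stay put, pay a penalty
        return state, -5.0
    if nxt == GOAL:
        return nxt, 10.0
    return nxt, -1.0             # step cost encourages short routes

random.seed(0)
for _ in range(500):             # training episodes
    s = (0, 0)
    while s != GOAL:
        a = (random.randrange(len(ACTIONS)) if random.random() < EPSILON
             else max(range(len(ACTIONS)), key=lambda x: Q[(s, x)]))
        s2, r = step(s, a)
        # Core update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(s2, x)] for x in range(len(ACTIONS)))
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Greedy rollout: after training, follow the highest-valued action.
s, path = (0, 0), [(0, 0)]
while s != GOAL and len(path) < 20:
    a = max(range(len(ACTIONS)), key=lambda x: Q[(s, x)])
    s, _ = step(s, a)
    path.append(s)
```

After training, the greedy rollout traces a route from the start corner to the goal corner without entering the obstacle cell; the negative step reward is what biases the learned Q-values toward short paths.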

  • Open Access

    ARTICLE

    ARQ–UCB: A Reinforcement-Learning Framework for Reliability-Aware and Efficient Spectrum Access in Vehicular IoT

    Adeel Iqbal1,#, Tahir Khurshaid2,#, Syed Abdul Mannan Kirmani3, Mohammad Arif4,*, Muhammad Faisal Siddiqui5,*

    CMC-Computers, Materials & Continua, Vol.87, No.2, 2026, DOI:10.32604/cmc.2026.075819 - 12 March 2026

    Abstract Vehicular Internet of Things (V-IoT) networks need intelligent and adaptive spectrum access methods to ensure ultra-reliable low-latency communication (URLLC) in highly dynamic environments. Traditional reinforcement learning (RL)-based algorithms, such as Q-Learning and Double Q-Learning, are often characterized by unstable convergence and inefficient exploration in the presence of stochastic vehicular traffic and interference. This paper proposes Adaptive Reinforcement Q-learning with Upper Confidence Bound (ARQ-UCB), a lightweight and reliability-aware RL framework, which explicitly reduces interruption and blocking probabilities while improving throughput and delay across diverse vehicular traffic conditions. The proposed ARQ-UCB algorithm extends the basic Q-updates… More >

  • Open Access

    ARTICLE

    A Hybrid Approach to Software Testing Efficiency: Stacked Ensembles and Deep Q-Learning for Test Case Prioritization and Ranking

    Anis Zarrad1, Thomas Armstrong2, Jaber Jemai3,*

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072768 - 12 January 2026

    Abstract Test case prioritization and ranking play a crucial role in software testing by improving fault detection efficiency and ensuring software reliability. While prioritization selects the most relevant test cases for optimal coverage, ranking further refines their execution order to detect critical faults earlier. This study investigates machine learning techniques to enhance both prioritization and ranking, contributing to more effective and efficient testing processes. We first employ advanced feature engineering alongside ensemble models, including Gradient Boosting, Support Vector Machine, Random Forest, and Naive Bayes classifiers, to optimize test case prioritization, achieving an accuracy score of 0.98847… More >

  • Open Access

    ARTICLE

    FAIR-DQL: Fairness-Aware Deep Q-Learning for Enhanced Resource Allocation and RIS Optimization in High-Altitude Platform Networks

    Muhammad Ejaz1, Muhammad Asim2,*, Mudasir Ahmad Wani2,3, Kashish Ara Shakil4,*

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072464 - 12 January 2026

    Abstract The integration of High-Altitude Platform Stations (HAPS) with Reconfigurable Intelligent Surfaces (RIS) represents a critical advancement for next-generation wireless networks, offering unprecedented opportunities for ubiquitous connectivity. However, existing research reveals significant gaps in dynamic resource allocation, joint optimization, and equitable service provisioning under varying channel conditions, limiting practical deployment of these technologies. This paper addresses these challenges by proposing a novel Fairness-Aware Deep Q-Learning (FAIR-DQL) framework for joint resource management and phase configuration in HAPS-RIS systems. Our methodology employs a comprehensive three-tier algorithmic architecture integrating adaptive power control, priority-based user scheduling, and dynamic learning mechanisms. More >

  • Open Access

    ARTICLE

    Dynamic Integration of Q-Learning and A-APF for Efficient Path Planning in Complex Underground Mining Environments

    Chang Su, Liangliang Zhao*, Dongbing Xiang

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-24, 2026, DOI:10.32604/cmc.2025.071319 - 09 December 2025

    Abstract To address low learning efficiency and inadequate path safety in spraying robot navigation within complex obstacle-rich environments—with dense, dynamic, unpredictable obstacles challenging conventional methods—this paper proposes a hybrid algorithm integrating Q-learning and improved A*-Artificial Potential Field (A-APF). Centered on the Q-learning framework, the algorithm leverages safety-oriented guidance generated by A-APF and employs a dynamic coordination mechanism that adaptively balances exploration and exploitation. The proposed system comprises four core modules: (1) an environment modeling module that constructs grid-based obstacle maps; (2) an A-APF module that combines heuristic search from A* algorithm with repulsive force strategies from… More >

  • Open Access

    ARTICLE

    Adaptive Path-Planning for Autonomous Robots: A UCH-Enhanced Q-Learning Approach

    Wei Liu1,*, Ruiyang Wang1, Guangwei Liu2

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-23, 2026, DOI:10.32604/cmc.2025.070328 - 09 December 2025

    Abstract Q-learning is a classical reinforcement learning method with broad applicability. It can respond effectively to environmental changes and provide flexible strategies, making it suitable for solving robot path-planning problems. However, Q-learning faces challenges in search and update efficiency. To address these issues, we propose an improved Q-learning (IQL) algorithm. We use an enhanced Ant Colony Optimization (ACO) algorithm to optimize Q-table initialization. We also introduce the UCH mechanism to refine the reward function and overcome the exploration dilemma. The IQL algorithm is extensively tested in three grid environments of different scales. The results validate the… More >

  • Open Access

    ARTICLE

    An Improved Reinforcement Learning-Based 6G UAV Communication for Smart Cities

    Vi Hoai Nam1, Chu Thi Minh Hue2, Dang Van Anh1,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-15, 2026, DOI:10.32604/cmc.2025.070605 - 10 November 2025

    Abstract Unmanned Aerial Vehicles (UAVs) have become integral components in smart city infrastructures, supporting applications such as emergency response, surveillance, and data collection. However, the high mobility and dynamic topology of Flying Ad Hoc Networks (FANETs) present significant challenges for maintaining reliable, low-latency communication. Conventional geographic routing protocols often struggle in situations where link quality varies and mobility patterns are unpredictable. To overcome these limitations, this paper proposes an improved routing protocol based on reinforcement learning. This new approach integrates Q-learning with mechanisms that are both link-aware and mobility-aware. The proposed method optimizes the selection of… More >

  • Open Access

    ARTICLE

    A Q-Learning Improved Particle Swarm Optimization for Aircraft Pulsating Assembly Line Scheduling Problem Considering Skilled Operator Allocation

    Xiaoyu Wen1,2, Haohao Liu1,2, Xinyu Zhang1,2, Haoqi Wang1,2, Yuyan Zhang1,2, Guoyong Ye1,2, Hongwen Xing3, Siren Liu3, Hao Li1,2,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-27, 2026, DOI:10.32604/cmc.2025.069492 - 10 November 2025

    Abstract Aircraft assembly is characterized by stringent precedence constraints, limited resource availability, spatial restrictions, and a high degree of manual intervention. These factors lead to considerable variability in operator workloads and significantly increase the complexity of scheduling. To address this challenge, this study investigates the Aircraft Pulsating Assembly Line Scheduling Problem (APALSP) under skilled operator allocation, with the objective of minimizing assembly completion time. A mathematical model considering skilled operator allocation is developed, and a Q-Learning improved Particle Swarm Optimization algorithm (QLPSO) is proposed. In the algorithm design, a reverse scheduling strategy is adopted to effectively… More >

  • Open Access

    ARTICLE

    Design and Test Verification of Energy Consumption Perception AI Algorithm for Terminal Access to Smart Grid

    Sheng Bi1,2,*, Jiayan Wang1, Dong Su1, Hui Lu1, Yu Zhang1

    Energy Engineering, Vol.122, No.10, pp. 4135-4151, 2025, DOI:10.32604/ee.2025.066735 - 30 September 2025

    Abstract With the growth of deregulated retail power markets, end users with smart meters and controllers can optimize their energy cost portfolios by comparing price plans offered by several retail energy firms. To help smart-grid end users reduce electricity payments and usage dissatisfaction, this article proposes a reinforcement learning-based decision system to aid electricity price plan selection. The decision problem is modeled as an enhanced state-based Markov decision process (MDP) without transition probabilities. A kernel-approximation-integrated batch Q-learning approach is used to solve the problem. Several adjustments to the sampling and data… More >

  • Open Access

    ARTICLE

    Deep Q-Learning Driven Protocol for Enhanced Border Surveillance with Extended Wireless Sensor Network Lifespan

    Nimisha Rajput1,#, Amit Kumar1, Raghavendra Pal1,#, Nishu Gupta2,*, Mikko Uitto2, Jukka Mäkelä2

    CMES-Computer Modeling in Engineering & Sciences, Vol.143, No.3, pp. 3839-3859, 2025, DOI:10.32604/cmes.2025.065903 - 30 June 2025

    Abstract Wireless Sensor Networks (WSNs) play a critical role in automated border surveillance systems, where continuous monitoring is essential. However, limited energy resources in sensor nodes lead to frequent network failures and reduced coverage over time. To address this issue, this paper presents an innovative energy-efficient protocol based on deep Q-learning (DQN), specifically developed to prolong the operational lifespan of WSNs used in border surveillance. By harnessing the adaptive power of DQN, the proposed protocol dynamically adjusts node activity and communication patterns. This approach ensures optimal energy usage while maintaining high coverage, connectivity, and data accuracy. More >

Displaying results 1-10 of 30 on page 1.