Open Access

ARTICLE


An Improved Reinforcement Learning-Based 6G UAV Communication for Smart Cities

Vi Hoai Nam1, Chu Thi Minh Hue2, Dang Van Anh1,*

1 Faculty of Information Technology, Hung Yen University of Technology and Education, Hung Yen, 170000, Viet Nam
2 Visiting Lecturer, Faculty of Software Technology, FPT University, Ha Noi, 100000, Viet Nam

* Corresponding Author: Dang Van Anh. Email: email

(This article belongs to the Special Issue: AI-Driven Next-Generation Networks: Innovations, Challenges, and Applications)

Computers, Materials & Continua 2026, 86(1), 1-15. https://doi.org/10.32604/cmc.2025.070605

Abstract

Unmanned Aerial Vehicles (UAVs) have become integral components of smart city infrastructures, supporting applications such as emergency response, surveillance, and data collection. However, the high mobility and dynamic topology of Flying Ad Hoc Networks (FANETs) present significant challenges for maintaining reliable, low-latency communication. Conventional geographic routing protocols often struggle when link quality varies and mobility patterns are unpredictable. To overcome these limitations, this paper proposes an improved routing protocol based on reinforcement learning. The new approach integrates Q-learning with mechanisms that are both link-aware and mobility-aware. The proposed method optimizes the selection of relay nodes using an adaptive reward function that accounts for energy consumption, delay, and link quality. Additionally, a Kalman filter is integrated to predict UAV mobility, improving the stability of communication links under dynamic network conditions. Simulation experiments were conducted in realistic scenarios, varying the number of UAVs to assess scalability. Key performance metrics were analyzed, including packet delivery ratio, end-to-end delay, and total energy consumption. The results demonstrate that the proposed approach improves the packet delivery ratio by 12%–15% and reduces delay by up to 25.5% compared to the conventional GEO and QGEO protocols. However, this improvement comes at the cost of higher energy consumption due to additional computation and control overhead. Despite this trade-off, the proposed solution ensures reliable and efficient communication, making it well-suited for large-scale UAV networks operating in complex urban environments.
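To make the mechanism described in the abstract concrete, the sketch below shows a standard Q-learning update driven by a composite reward over link quality, delay, and energy, plus a constant-velocity prediction step (the "predict" half of a Kalman filter) for anticipating a neighbor UAV's position. This is an illustrative sketch only: the paper's actual state/action encoding, reward weights (`W_LINK`, `W_DELAY`, `W_ENERGY`), learning parameters, and filter model are not given in the abstract, so all names and values here are assumptions.

```python
import numpy as np

ALPHA, GAMMA = 0.1, 0.9                     # assumed learning rate and discount factor
W_LINK, W_DELAY, W_ENERGY = 0.5, 0.3, 0.2   # assumed reward weights

def composite_reward(link_quality, delay, energy):
    """Adaptive reward favoring good links, low delay, and low energy use.
    Inputs are assumed normalized to [0, 1]."""
    return W_LINK * link_quality - W_DELAY * delay - W_ENERGY * energy

def q_update(Q, state, action, reward, next_state):
    """Standard Q-learning update for next-hop (relay) selection."""
    best_next = np.max(Q[next_state])
    Q[state, action] += ALPHA * (reward + GAMMA * best_next - Q[state, action])
    return Q

def predict_position(pos, vel, dt=1.0):
    """Constant-velocity prediction of a UAV's position after dt seconds,
    i.e., the prediction step of a simple Kalman filter."""
    return pos + vel * dt

# Toy usage: 3 network states, 4 candidate relay nodes
Q = np.zeros((3, 4))
r = composite_reward(link_quality=0.9, delay=0.2, energy=0.1)
Q = q_update(Q, state=0, action=2, reward=r, next_state=1)
next_pos = predict_position(np.array([10.0, 5.0]), np.array([2.0, -1.0]))
```

A protocol built this way would recompute `Q` entries as packets are relayed, then forward each packet to the neighbor with the highest Q-value whose predicted position keeps it within radio range.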

Keywords

UAV; FANET; smart cities; reinforcement learning; Q-learning

Cite This Article

APA Style
Nam, V.H., Hue, C.T.M., Anh, D.V. (2026). An Improved Reinforcement Learning-Based 6G UAV Communication for Smart Cities. Computers, Materials & Continua, 86(1), 1–15. https://doi.org/10.32604/cmc.2025.070605
Vancouver Style
Nam VH, Hue CTM, Anh DV. An Improved Reinforcement Learning-Based 6G UAV Communication for Smart Cities. Comput Mater Contin. 2026;86(1):1–15. https://doi.org/10.32604/cmc.2025.070605
IEEE Style
V. H. Nam, C. T. M. Hue, and D. V. Anh, “An Improved Reinforcement Learning-Based 6G UAV Communication for Smart Cities,” Comput. Mater. Contin., vol. 86, no. 1, pp. 1–15, 2026. https://doi.org/10.32604/cmc.2025.070605



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.