Open Access
REVIEW
Survey on AI-Enabled Resource Management for 6G Heterogeneous Networks: Recent Research, Challenges, and Future Trends
1 Centre of Advanced Communication, Research and Innovation (ACRI), Department of Electrical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, 50603, Malaysia
2 School of Computing Sciences, College of Computing, Informatics and Mathematics, Universiti Teknologi Mara, Shah Alam, 40450, Malaysia
3 Centre for Cyber Security, Faculty of Information Science and Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), Bangi, 43600, Malaysia
4 Faculty of Science and Engineering, Waseda University, Tokyo, 169-8555, Japan
5 Faculty of Telecommunications, Posts and Telecommunications Institute of Technology, Hanoi, 11518, Vietnam
* Corresponding Authors: Kaharudin Dimyati. Email: ; Quang Ngoc Nguyen. Email:
Computers, Materials & Continua 2025, 83(3), 3585-3622. https://doi.org/10.32604/cmc.2025.062867
Received 30 December 2024; Accepted 02 April 2025; Issue published 19 May 2025
Abstract
The forthcoming 6G wireless networks have great potential for establishing AI-based networks that can enhance end-to-end connectivity and manage the massive data of real-time networks. Artificial Intelligence (AI) advancements have contributed to the development of several innovative technologies by providing sophisticated AI models such as machine learning models, deep learning models, and hybrid models. Furthermore, intelligent resource management allows for the self-configuration and autonomous decision-making capabilities of AI methods, which in turn improve the performance of 6G networks. Hence, 6G networks rely substantially on AI methods to manage resources. This paper comprehensively surveys recent work on AI-based resource management for 6G networks. Firstly, the AI methods are categorized into Deep Learning (DL), Federated Learning (FL), Reinforcement Learning (RL), and Evolutionary Learning (EL). Then, we analyze the AI approaches according to optimization issues such as user association, channel allocation, power allocation, and mode selection. Thereafter, we provide appropriate solutions to the most significant problems with the existing approaches to AI-based resource management. Finally, various open issues and potential trends related to AI-based resource management applications are presented. In summary, this survey enables researchers to understand these advancements thoroughly and quickly identify remaining challenges that need further investigation.
1 Introduction
The rapid advancement of communication infrastructures and the potential proliferation of wireless applications have encouraged the swift adoption of wireless technology [1]. 5G networks have recently been developed to support ultra-reliable and low-latency communications (URLLC), massive machine-type communication (mMTC), and enhanced mobile broadband (eMBB) [2,3]. Anticipated features of 6G networks include a plethora of new services, such as holographic communications, remote surgery and telemedicine, the tactile internet, Artificial Intelligence (AI)-driven connectivity, brain-computer interfaces, high-precision positioning, and quantum communication [4,5]. The International Mobile Telecommunications-2030 guidelines set ambitious targets for 6G wireless technology to fulfill the broad and sophisticated needs of future applications such as the Internet of Things (IoT) and smart surfaces [6]. The most important performance indicators anticipated for 6G are a 1 terabit per second (Tbps) data rate, 0.01 to 0.1 ms latency, 1000 km/h mobility, and a connection density of 10 million devices/km² [7]. Compared to 5G, 6G is expected to provide terahertz bandwidth and a promising data rate, along with increased reliability and reduced latency [8,9]. In addition, to fulfill the various demands of 6G, AI is anticipated to facilitate fully autonomous systems that incorporate distributed learning models [10].
The purpose of AI-enabled 6G networks is to automate operations, analyze massive data, and create intelligent cloud, edge, and fog nodes [11]. Ultimately, the main goal is to establish a seamless end-to-end connection that is impossible with current 5G standards [12]. In 6G networks, network resources must be deployed intelligently to meet the diverse Quality of Service (QoS) demands of heterogeneous devices. Low-latency and ultra-reliable networks are needed to guarantee real-time data exchange and seamless network connectivity between autonomous vehicles [13]. The physical infrastructure is expected to support various wireless network applications such as extended reality, telemedicine, and video streaming [14]. The development of these applications poses significant issues for resource management in 6G networks, as they require network services with particular performance features, including mobility management, latency reduction, energy efficiency, and spectrum efficiency [15]. The architecture standards for 6G networks must urgently tackle these challenges to maximize the effective utilization of network resources [16]. In addition, effective resource management is essential for facilitating information sharing in Device-to-Device (D2D) communication, map navigation, health alert notifications, etc. [17,18]. Concurrently meeting the requirements of reliability and energy- and spectrum-efficient communication will be particularly challenging. Therefore, it is essential to develop collaborative optimization solutions for resource management issues in 6G network applications.
These challenges include channel allocation, interference management, user association, and power allocation, all of which are crucial for meeting diverse needs [19]. Heuristic and suboptimal optimization approaches have traditionally been used to solve resource management challenges in wireless networks [20]. However, global optimization algorithms are unable to resolve the NP-hard joint optimization of spectrum efficiency and energy efficiency in the forthcoming 6G networks due to its complexity [21]. The field of artificial intelligence, which includes machine learning algorithms, has emerged as a potential means of handling computationally challenging and NP-hard problems [22,23]. This development has encouraged researchers to adopt machine learning algorithms to address joint optimization issues in 6G wireless networks. Additionally, robustness, reliability, and resource efficiency have become greater concerns in 6G networks. Hence, intelligent resource management in 6G networks, enabled by machine learning, necessitates a radical departure from conventional resource management approaches [24].
Furthermore, 6G wireless network optimization based on AI methods is influenced by the high data rates offered by the terahertz spectrum [25]. AI approaches are expected to deliver cognitive services, effective spectrum management, and autonomous network installations [26]. Moreover, the integration of 6G networks and machine learning may efficiently enhance resource utilization and enable real-time learning for autonomous systems [27]. Achieving a balance between the anticipated exponential increase in data traffic, the integration of sensor-based services, and the tremendous densification of networks is crucial in the progress toward 6G networks [28]. Thus, AI-enabled 6G networks will play a crucial role in society and industry, meeting the communication demands of machines and humans [29,30]. Fig. 1 illustrates a vision of a 6G wireless network.

Figure 1: A vision of 6G wireless network
This survey article comprehensively reviews the prospective advantages and challenges of integrating AI approaches with the latest technologies in 6G networks. We address how to improve the efficiency of 6G wireless networks and the issues of managing their resources with AI approaches. This article explains the methods used by current studies to address intelligent resource management in 6G networks using AI techniques. These AI techniques are classified into Deep Learning (DL), Federated Learning (FL), Reinforcement Learning (RL), and Evolutionary Learning (EL). Subsequently, we evaluate the AI methods based on optimization challenges such as user association, channel allocation, power allocation, and mode selection. Furthermore, outstanding issues and possible developments in AI-based resource management are discussed. Fig. 2 shows the survey structure.

Figure 2: Survey structure
2 Motivations and Contributions
Our comprehensive survey addresses this knowledge gap and encourages further research on intelligent resource management for 6G networks using AI. We collate recent works on AI types including DL, FL, RL, and EL, and analyze them according to optimization challenges such as user association, channel allocation, power allocation, and mode selection. Additionally, this review offers an innovative perspective on and categorization of recent literature, with a particular emphasis on AI-enabled 6G networks. Consequently, we can define the unresolved challenges and issues associated with intelligent resource management. In addition, we suggest many intriguing future research topics in the design principles of 6G wireless applications enabled by AI.
The main contributions are summarized below:
1. Provide a summary of current AI methods for 6G network resource management applications, including DL, FL, RL, and EL.
2. Investigate the AI methods in the context of optimization problems such as user association, channel allocation, power allocation, and mode selection.
3. Summarize the AI-assisted methods, including the learning types, optimization issues, applied scenarios, advantages, and limitations.
4. Identify the unsolved issues, areas of study that need more investigation, and potential remedies related to the future directions of machine learning applications for the standard design of 6G networks.
In this paper, we discuss different aspects of AI-based 6G wireless networks as shown in Fig. 2. The rest of the paper is organized as follows: In Section 3, we summarize the related research and provide an overview of the current research surveys that use AI in wireless communication networks. Then, we provide an overview of AI-based resource management in Section 4. Section 5 provides an overview of the present AI approaches used for resource management, including DL, FL, RL, and EL. In addition, Section 6 explores the upcoming trends and directions for resource management in future AI-powered wireless communications. Section 7 presents the comparative analysis of several AI strategies for 6G HetNets. Finally, the conclusion of the paper is presented in Section 8.
3 Related Work
The use of AI for future communications has been the subject of several research studies in the past few years, owing to the prospective advantages that AI can offer. In [31], the integration of Machine Learning (ML) and Blockchain Technology (BCT) is the main focus of the study. The authors evaluated the implementation of ML in 6G, examining its applications and incorporation with conventional and non-conventional media communication, as well as its incorporation into the 6G-IoT network. Furthermore, the complicated configuration of wireless communication systems demands an extensive review of privacy and security, which has inspired extensive study of BCT. This review paper analyzed BCT's characteristics, framework, and utilization in various communication environments. Moreover, the authors performed a comprehensive examination of AI and BCT, studying their effect on communication networks. Implementing this integrated approach has specific characteristics crucial for wireless communication and networks, including model sharing and decentralized data.
In [32], the authors comprehensively reviewed ML-enabled wireless communication in 6G networks. The survey investigated current machine learning approaches, including supervised and unsupervised machine learning, federated learning, reinforcement learning, deep learning, and deep reinforcement learning for resource management applications. The authors investigated the performance of ML methods in three different network classifications, specifically D2D, fog-radio access, and vehicular networks. Moreover, they presented a summary of new ML-based algorithms designed to address the challenges of resource allocation, task offloading, and handover. Eventually, they emphasized the open issues, challenges, and potential solutions, in addition to future investigation designs, in the framework of 6G wireless applications. In [33], the authors presented an extensive examination and comprehensive evaluation of AI-RAN's vision and current issues. Firstly, the authors introduced a brief overview of 6G AI-RAN, then discussed the existing 5G RAN methodologies and the challenges that must be addressed to successfully deploy 6G AI-RAN. Furthermore, the study analyzed state-of-the-art research areas in AI-RAN, focusing on spectrum allocation, network design, and resource management issues. Moreover, they focused on several approaches to tackle these issues, including employing modern ML and edge computing techniques to enhance the efficiency of 6G AI-RAN.
In [34], the authors investigated the implementation of AI-enabled applications to handle various aspects of 6G mobile communications, particularly intelligent network management and mobility, channel coding, massive Multi-Input Multi-Output (MIMO), and beamforming. This research showed that the 6G framework, employing AI approaches, can intelligently manage network configurations and various resources related to slices, computational power, caching, energy, and communications to meet continually changing demands. Following that, the authors identified many challenges and potential investigation areas for improving AI-enabled 6G networks. AI-based 6G wireless networks experience multiple challenges, specifically in the management of massive data, complex algorithms, interference control, energy efficiency, and privacy. This paper highlighted several future areas of study, which include intelligent spectrum management, AI-based beamforming, autonomous networks, the integration of quantum technology and AI, AI-enhanced satellite networks, an emphasis on environmental sustainability, AI-improved network slicing, and the use of distributed ML for communications. In [35], the authors extensively reviewed resource management techniques in developed Heterogeneous Networks (HetNets). A comprehensive evaluation of the existing resource management methods for HetNets is provided. Moreover, the article reviewed recent studies on various aspects of resource management, namely user association, power allocation, mode selection, and spectrum efficiency, to highlight existing research gaps. The surveyed resource management aspects are organized by approaches, criteria, methods, strategies, and structures across various network scenarios. In addition, the paper presented the efficient approaches utilized by HetNets to address the challenges of intelligent communications.
In [36], the authors extensively investigated the suitability of DRL and RL approaches for resource allocation in 6G network slicing. The authors conducted a comprehensive examination of relevant research studies and examined the feasibility and effectiveness of the suggested methods in tackling issues regarding resource allocation, admission control, resource orchestration, and resource scheduling. Furthermore, they evaluated the methodologies based on the optimization objectives, including the network's emphasis, the range of potential states and actions, the algorithms employed, the framework of deep neural networks, and the balance between exploration and exploitation. In [37], resource optimization is comprehensively examined, covering both current and previous investigations of approaches, indicators, and application scenarios. The authors also demonstrated the significance of resource optimization for future 6G networks. Moreover, they investigated various current methodologies for evaluating the performance of 6G IoT networks, especially concentrating on metrics such as bandwidth, reliability, latency, EE, and throughput. Finally, the paper analyzed the challenges experienced in this area of study and provided promising techniques for future research to enhance the performance of 6G IoT networks.
In [38], the authors provided a resource management framework that utilizes AI in future Beyond Fifth Generation (B5G)/6G networks to examine the significance of AI in resource allocation for future wireless communication systems. To comprehensively investigate the present innovations in relevant research, the study reviewed and compared resource management strategies in two classifications: AI-based and model-based methods. Furthermore, the authors highlighted the issues that must be overcome to incorporate AI into existing and future wireless networks, including the requirement for datasets and test cases. Additionally, they discussed opportunities such as exploring the theoretical performance limits of AI-based resource management, mapping scenarios, goals, and algorithms, utilizing both broad learning and deep learning techniques, and investigating innovative approaches for achieving performance metrics with explainable artificial intelligence. In [39], the authors concentrated on utilizing AI and ML features to enhance 6G networks and optimize resource management. They demonstrated advanced terahertz approaches, including ML-based terahertz channel estimation and spectrum allocation, which are regarded as innovative means of accomplishing high-speed transmission across a wide range of frequencies. Furthermore, they examined AI and ML applications in power management, specifically for energy-harvesting networks. In addition, this review paper concentrated on AI and ML technologies to improve security in IoT systems, which involves the enhancement of authentication, attack detection processes, and access control. They also covered effective handover management and mobility strategies using Q-learning, DRL, and DL to provide highly reliable and secure connections and meet the requirements of dynamic 6G networks. Intelligent resource allocation methods, such as traffic, storage, and computation offloading techniques, have been recognized as potential approaches for satisfying the low-latency demands of 6G applications.
In [40], the authors comprehensively reviewed the influential "learning to optimize" methods in various fields of 6G networks. The major optimization issues and specifically designed ML frameworks are identified and examined. Specifically, this paper focused on algorithm unrolling, graph neural networks, DRL, end-to-end learning, and wireless federated learning for distributed optimization. These techniques are designed to tackle challenging problems that arise in various significant wireless applications. The paper also covered issues such as the architecture of neural networks, theoretical tools employed in various ML techniques, implementation obstacles, and future research objectives. These discussions aimed to provide practical guidance for using ML models in 6G wireless networks. In [41], the authors performed a comprehensive analysis of the implementation of AI and ML in 6G networks to improve the perspective of the future IoT. They discussed the development of communication systems and emphasized the importance of incorporating AI/ML algorithms into the structure of these systems. The paper also explored the potential of AI/ML algorithms to improve the IoT services supplied by smart facilities, particularly concentrating on fields of application, such as smart agriculture, smart healthcare, smart transportation, and smart industry, where such algorithms can perform effectively.
In [42], the authors presented an in-depth investigation of various ML, DL, RL, and DRL algorithms that can be employed to optimize the challenges encountered while developing technologies to satisfy the demands of 6G networks. They demonstrated that the utilization of ML algorithms may efficiently tackle an extensive range of issues associated with SE, EE, throughput, reduced computation, and the development of reliable and secure communication channels. Nevertheless, while these techniques have been effective in previous research, it is crucial to recognize that additional investigation is required to optimize their efficacy in advancing innovation. In conclusion, they presented potential ML approaches to efficiently tackle obstacles that could occur in 6G networks. These challenges include expanding coverage, connecting terrestrial and non-terrestrial systems, reducing latency, and integrating sensing and communication techniques. However, the proposed solutions often lack comprehensive validation in real-world scenarios, highlighting the need for more empirical studies and experimental validation to assess their effectiveness and scalability. In addition, none of the aforementioned surveys categorized machine learning approaches into DL, FL, RL, and EL and analyzed them according to user association, channel allocation, power allocation, and mode selection. In summary, while the present investigations provide useful insights into resource management in 6G networks, further research is required to tackle the practical issues and restrictions faced in real-world implementations. In contrast to prior investigations that provide a broad summary of AI approaches for 6G networks, our survey provides an organized taxonomy of AI approaches, namely DL, FL, RL, and EL, specifically applied to resource management challenges such as user association, channel allocation, power control, and mode selection. We further demonstrate the trade-offs between AI model accuracy, complexity, and real-time applicability, hence illuminating their viability in realistic 6G environments. Additionally, a comparative analysis section has been included, differentiating our research from existing surveys regarding scope, contributions, and methodology. This study identifies significant resource management challenges in AI-based 6G networks and proposes innovative research directions, such as hybrid AI models. Table 1 presents a concise overview of the current research on the integration of machine learning in 6G wireless communication networks.

4 An Overview of Resource Management in 6G Networks
Efficient allocation and utilization of network resources is a crucial aspect of 6G wireless communication networks. Meeting the increasing demands of emerging services and applications requires efficient use of resources [34]. 6G networks are expected to be available to billions of devices with wide connectivity, low latency, and high transmission rates [43]. Diverse methods of managing resources, including spectrum allocation, power control, user association, and interference management, are required to accomplish these ambitious objectives [44]. The effective distribution of the accessible spectrum is one of the main obstacles to 6G resource management. 6G is expected to use a diverse array of frequency bands, surpassing those of previous generations [45]. These bands include millimeter-wave (mmWave) and THz. Distinctive characteristics and issues are associated with each of these bands [46]. For example, mmWave and THz bands have enormous data rates but significant path loss and need sophisticated beamforming methods for reliable communication. Dynamic spectrum allocation based on network circumstances and user needs is essential for efficient spectrum management [47]. Power control is an additional significant 6G resource management aspect.
Managing interference is becoming more challenging as the number of devices increases and networks are densified by deploying small cells [48]. To minimize interference, maximize energy efficiency, and guarantee QoS for various applications, reliable power management systems are crucial [29]. The network environment and user mobility must be considered while adjusting transmission power. In addition, user association and load balancing are essential aspects of resource management in 6G networks. As 6G encompasses a wide range of technologies, from macro cells to small cells and D2D communication to satellite networks, it is crucial to intelligently pair users with the most suitable network nodes [49]. Signal strength, network congestion, and user mobility are all aspects that must be taken into account. By balancing traffic across networks, effective user association schemes increase network efficiency and user experience. The utilization of high-frequency bands and dense network deployments makes interference control an even more significant issue in 6G networks [50,51]. Interference alignment, intelligent reflecting surfaces, and coordinated multi-point transmission are some of the advanced interference reduction strategies that are anticipated to be pivotal in interference management [52]. These strategies need advanced coordination and immediate adjustment to the dynamic network environment.
The management of resources in 6G networks is expected to experience a revolutionary change because of the advent of AI and ML [53]. To optimize resource allocation, anticipate traffic patterns, and react to varying network circumstances, ML algorithms can analyze tremendous amounts of data [54]. For instance, consider the applications of DL models in traffic management for predictive analytics and RL in optimizing power regulation and spectrum allocation via environmental learning [55]. FL is appropriate for user association and load-balancing activities since it provides a means to use distributed information across various devices while maintaining privacy [56]. EL optimizes complicated processes including dynamic resource allocation and network setup, improving network efficiency [57]. In general, the management of resources in 6G wireless communication networks requires a comprehensive methodology that deals with many difficulties related to spectrum allocation, power control, user association, and mode selection. AI/ML methods improve the effectiveness and flexibility of these procedures, thereby enabling the full utilization of 6G networks. As advancements in research and development progress in this area, it will be crucial to use inventive methods and techniques to fulfill the challenging demands of upcoming wireless communication.
5 AI-Based Resource Management in 6G Networks
Advancements in AI play an essential role in 6G networks, offering remarkable improvements in connectivity, efficiency, and speed [44]. Researchers are using AI algorithms to optimize network management, forecast traffic patterns, and enhance resource allocation [58]. AI models are used in dynamic spectrum management to make predictions and assign spectrum resources in real time, maximizing throughput while minimizing interference. Improved reliability is also achieved via AI-driven predictive maintenance, which identifies and fixes network issues before they affect service quality. Consequently, AI will play a crucial role in developing 6G networks, enabling more intelligent, secure, and faster wireless communication [59]. Deep learning, reinforcement learning, federated learning, and evolutionary learning are some of the machine learning techniques used in modern wireless networks. AI approaches are categorized based on their distinct applications in tackling optimization issues of resource management in 6G networks. These approaches offer capabilities in terms of scalability, adaptability, and decision-making. Moreover, Deep Reinforcement Learning (DRL) and Federated Reinforcement Learning (FRL) are two examples of hybrid techniques that incorporate attributes from several different learning models to improve overall performance. The classification of AI types is explained in Fig. 3.

Figure 3: Classification of AI approaches
5.1 Deep Learning (DL)
Modern wireless networks are undergoing a dramatic transformation due to the efficiency and effectiveness achieved by deep learning algorithms [58]. These methods improve modulation and demodulation accuracy and efficiency in advanced signal processing. Additionally, they are vital in enhancing data throughput and reducing congestion via dynamic resource allocation and traffic prediction, all of which contribute to optimal network management. When applied to wireless networks, deep learning makes them more intelligent, flexible, and reliable. Both supervised and unsupervised learning fall under the general category of deep learning [59,60]. Supervised learning uses labeled data to train algorithms, whereas unsupervised learning uses unlabeled data to find patterns and structures.
5.1.1 Supervised Learning
Supervised learning trains algorithms on a labeled dataset to produce accurate predictions or classifications based on newly acquired information. In [61], the researchers investigate the issue of user association from the perspective of deep learning. They present a deep learning approach to link user equipment to competing macro and small base stations. To find the asymptotically optimal solution for labeling in supervised learning, the user association issue is formulated as an optimization problem. They analyze the accuracy of the proposed method after training the U-Net model. Based on the simulation results, the proposed scheme outperforms the Genetic Algorithm (GA) scheme in terms of computation time and scalability, and it approaches the GA scheme in terms of cumulative rate gain. However, channel allocation and power control remain open issues. In [62], the authors investigate the user association and resource allocation problems in HetNets. Their primary objective is to optimize Energy Efficiency (EE) while considering the limitations imposed by QoS, power, and interference. In particular, the Lagrange dual decomposition approach is used to handle the problem of user association. On the other hand, semi-supervised learning and Deep Neural Networks (DNN) are utilized for resource allocation. According to the simulation, the proposed approach has the potential to produce greater EE while simultaneously reducing complexity. However, cross-tier interference needs to be considered.
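To make the supervised formulation concrete, the following minimal PyTorch sketch treats user association as a classification task: per-user received-signal features are mapped to the index of the serving base station, with labels assumed to come from an offline near-optimal solver such as a GA or exhaustive search. All dimensions, feature choices, and hyperparameters are illustrative assumptions, not the exact setup of [61,62].

```python
# Hypothetical sketch: supervised user association as classification, in the
# spirit of [61]. Labels are assumed to come from an offline near-optimal
# solver (e.g., exhaustive search or a GA); all sizes are illustrative.
import torch
import torch.nn as nn

N_BS = 8            # candidate base stations (assumption)
N_FEATURES = N_BS   # per-user features: RSS/SINR toward each BS (assumption)

model = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_BS),        # logits over candidate BSs
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(rss_batch, labels):
    """One supervised step; rss_batch: [B, N_BS], labels: [B] (optimal BS index)."""
    optimizer.zero_grad()
    loss = loss_fn(model(rss_batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def associate(rss_batch):
    """Inference: associate each user with the BS of the highest predicted score."""
    with torch.no_grad():
        return model(rss_batch).argmax(dim=1)
```

At inference time the trained network replaces the costly offline solver, which is consistent with the computation-time gains over the GA reported in [61].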
In [63], the researchers examined the challenges associated with future TV broadcasting via 5G wireless mobile networks. They present a framework for allocating multimedia resources in TV broadcasting using a network slice-based approach. The system includes a proactive network-slicing architecture and a detailed explanation of its operational mechanism. A prediction technique is then specifically developed for resource allocation based on TV broadcasting service demand. The suggested architecture prioritizes optimizing energy efficiency by considering the combined transmit power and bandwidth in allocating 5G resources. Deep reinforcement learning is used to effectively handle the system complexity, and a convex problem technique is presented to obtain the best solution and speed up the training process. Results indicate that the suggested strategy can properly anticipate multicast service needs and increase network energy efficiency under specific QoS criteria and temporal fluctuations. However, more accurate multicast service demand prediction, using a deep learning model with additional cells and layers, remains to be considered. In [64], the authors examine the optimization of spectral efficiency (SE) in the context of a massive MIMO network, taking into account different numbers of users. The optimization is formulated as a joint data and pilot power control problem. Given the non-convex nature of the issue, they developed an innovative iterative technique that reaches an equilibrium point in polynomial time. Furthermore, they provide a deep learning solution to facilitate real-time deployment. Data and pilot powers are predicted by the proposed PowerNet neural network, which only makes use of large-scale fading information. The main contribution is a neural network that can accommodate a constantly changing number of users, allowing PowerNet to approximate numerous power control functions with varied inputs and outputs. The results demonstrate the superiority of the proposed approach over the iterative algorithm in terms of SE. Nevertheless, the channel allocation problem needs to be considered.
In [65], the researchers examined the issue of handover in 5G networks. The paper aims to transform the handover issue into a classification problem and then solve it using deep learning, which typically produces highly accurate results for classification problems. The proposed method considers two user features: Signal to Interference Noise Ratio (SINR) and SINR change. The simulation results indicate that the proposed method has the lowest rates of radio connection failure and ping-pong compared to the benchmark mechanisms. However, the system throughput needs more consideration and attention. In [66], the authors investigated the power allocation of the Cloud Radio Access Network (C-RAN), optimizing the selection of Remote Radio Heads (RRH) and providing information about the associated connections that correspond to those RRHs. A neural network-based optimization model was introduced with the objective of optimizing RRH selection. The Group Sparse Beamforming technique was used to evaluate the model and provide near-optimal solutions for power consumption. The obtained results encourage the use of machine learning methods to mitigate the power consumption and complexity associated with this rising field. However, user fairness needs to be considered.
In [67], an ML approach is employed to address the cell selection problem in 5G Ultra-Dense Networks (UDNs). A cell selection approach based on a neural network is proposed that employs a trained back-propagation model to execute the problem of small base station selection effectively. The objective is to extend the duration of presence within small cells, thereby reducing the handovers. The trained suggested model is capable of predicting the optimal small base station with a high level of precision and a minimal error rate. The results demonstrate that the proposed approach effectively accomplishes its objectives by reducing the rate of handovers and extending the duration of vehicle presence throughout small cells. Consequently, the occurrence of failed and needless handovers is reduced. Furthermore, compared to non-ML approaches, computational complexity is decreased. However, additional input features must be considered throughout training to ensure the model’s applicability across different scenarios and environments.
5.1.2 Unsupervised Learning
Unsupervised learning uses algorithms to find patterns, structures, and connections in unlabeled data. In [68], an iterative channel allocation method and a power allocation method based on DNNs with unsupervised learning are proposed for the downlink of a Non-Orthogonal Multiple Access (NOMA)-based HetNet. The optimization issue aims to maximize the total rate while maintaining the QoS demand. The presented algorithm offers a sum rate and outage probability comparable to those of the interior point method, which can provide the ideal solution but has considerably greater complexity than the proposed scheme. Furthermore, when contrasted with the traditional two-sided matching technique, the proposed channel allocation approach obtains a higher NOMA gain. However, the interference among users needs to be considered. In [69], the problem of power allocation and user association in dynamic HetNets is investigated. The authors propose a novel approach for reducing computational complexity in dynamic HetNets using unsupervised learning-based user association and power control algorithms. They develop an unsupervised learning approach using a recurrent neural network that can be adjusted to accommodate different numbers of users. Extensive simulations have shown that the suggested approach outperforms existing optimization-based techniques in terms of fairness performance. However, the inter-cluster interference needs to be considered.
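A key idea behind the unsupervised power-control works above is that no labeled optimal powers are required: the network's own objective (e.g., the negative sum rate) serves directly as the training loss. The PyTorch-style sketch below illustrates this under assumed dimensions and a simplified SINR model; it is not the exact architecture of [68,69].

```python
# Hypothetical sketch of unsupervised power control in the spirit of [68,69]:
# no labels are needed because the (negative) sum rate itself is the loss.
import torch
import torch.nn as nn

N_LINKS = 4
P_MAX, NOISE = 1.0, 1e-3  # assumed normalized power budget and noise power

policy = nn.Sequential(
    nn.Linear(N_LINKS * N_LINKS, 128), nn.ReLU(),
    nn.Linear(128, N_LINKS), nn.Sigmoid(),  # powers in [0, 1] * P_MAX
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def neg_sum_rate(G, p):
    """G: [B, N, N] channel gains (G[b, i, j] = gain from tx j to rx i);
    p: [B, N] transmit powers. Returns the negative sum rate (to minimize)."""
    signal = torch.diagonal(G, dim1=1, dim2=2) * p
    interference = (G * p.unsqueeze(1)).sum(dim=2) - signal
    sinr = signal / (interference + NOISE)
    return -torch.log2(1.0 + sinr).sum(dim=1).mean()

def train_step(G):
    """One unsupervised step over a batch of channel realizations G."""
    optimizer.zero_grad()
    p = P_MAX * policy(G.flatten(start_dim=1))
    loss = neg_sum_rate(G, p)
    loss.backward()
    optimizer.step()
    return -loss.item()  # average achieved sum rate
```

Because the loss is the system objective itself, the network learns a power-control policy purely from randomly drawn channel realizations.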
In [70], for mmWave massive MIMO HetNets, the authors suggest two hybrid precoding methods that use unsupervised learning with graph attention networks and convolutional neural networks. The proposed methods significantly improve training efficiency and reduce the difficulty of acquiring samples. Graph attention networks and multi-head mechanisms are investigated for their strong learning capacity, obtaining correct precoding vectors by analyzing network spatial properties and reducing the algorithm's computational complexity without prior knowledge of the graph structure. Extensive simulations demonstrate that the proposed algorithms exhibit significant advantages in terms of enhancing SE and EE while maintaining low computational complexity. Nevertheless, the proposed algorithm examined a limited number of users. In [71], a DNN allocates resources for a 5G massive MIMO network. The multi-objective sine cosine algorithm is used to optimize the objective functions. Data rate, SINR, power consumption, and EE are the objective functions used in this optimization procedure. Subsequently, the optimized objective functions are passed to the neural network to distribute resources. Additionally, the fairness index of the neural network-based resource distribution procedure is also determined. The results demonstrate that the proposed approach offers superior performance compared to other existing approaches. However, the tradeoff between SE and EE needs to be considered.
In [72], the authors propose a deep learning algorithm to tackle the power allocation problem in C-RAN. User association is considered in the optimization issue to represent an actual cellular environment accurately. The authors conduct a comprehensive study to examine the trade-offs encountered while using a deep learning-based solution for power allocation. Achieving near-optimal performance with a low computing complexity is shown by the results of the proposed method. Nonetheless, the proposed deep learning method does not always guarantee an acceptable outcome. In [73], the authors investigate the problem of resource allocation for the URLLC network in order to maximize energy efficiency. The Dinkelbach transformation is used to turn the initial fractional optimization issues into linear problems. A DNN is used to formulate the power control function. The proposed approach provides superior results compared to the random power allocation and the QoS demands are effectively ensured. Simulation results confirm that the proposed approach can significantly enhance the system’s EE. Furthermore, the training method exhibits rapid convergence while maintaining a low complexity. However, an appropriate channel allocation strategy needs to be considered.
5.1.3 Open Issues and Proposed Solutions
Concerning channel allocation, power allocation, user association, mode selection, and interference control, several issues with deep learning approaches remain unsolved. 6G environments are dynamic and varied, making it difficult to use supervised DL models for channel and power allocation without large amounts of labeled data. To address this, data augmentation and transfer learning could decrease the need for labeled data and enhance model generalization. However, unsupervised DL struggles to effectively capture the intricate patterns in these tasks because of the lack of labeled data. Advanced clustering and self-supervised learning methods improve the capacity to learn from unlabeled raw data. Regarding user association and mode selection, it is important to consider the limitations of supervised and unsupervised DL models. Supervised DL models may struggle to adapt to different scenarios, while unsupervised models may not fully capture user behavior and network conditions. These tasks may be improved by hybrid techniques that combine smaller labeled datasets with greater amounts of unlabeled data. Both supervised and unsupervised DL face enormous challenges in interference control because of the high noise and unpredictability involved. Interference may be dynamically managed via reinforcement learning, where the model optimizes behaviors depending on rewards. Explainable AI methods play a vital role in offering valuable insights into the decision-making process of deep learning models and contribute to the improvement of trust and transparency in AI systems. The primary objective of these solutions is to use DL to its fullest capacity to tackle the complicated issues of 6G wireless networks. Table 2 demonstrates the recent work on DL approaches.

5.2 Federated Learning (FL)
Federated learning is revolutionizing 6G networks by allowing intelligent application development [74]. Within this framework, edge devices engage in collaborative training of machine learning models by utilizing their local data, hence obviating the need to transmit sensitive information to a centralized server. This method becomes even more important because of the requirement to efficiently analyze the massive data produced by an enormous number of devices. The distributed architecture of 6G networks is perfect for federated learning since it improves data privacy, optimizes network capacity, and decreases latency [75]. There are two main types of federated learning: centralized and decentralized.
5.2.1 Centralized Federated Learning
Centralized federated learning uses several edge devices to train models independently and submit their updated parameters to a central server to create a global model. In [76], the authors suggested practical techniques for estimating QoS metrics acquired via network data, specifically for implementation in in-vehicle applications. The suggested approach depends on the utilization of federated learning to train regression neural networks. The authors demonstrated that this strategy has the advantage of providing predictions on par with those of centralized training, without requiring the transmission of raw measurement data from vehicles. In addition, they validated this method by recovering classical closed-form delay quantiles based on analytical models of basic queueing mechanisms. The authors demonstrated that their methodology overcomes the limitations of basic models by offering quantile estimates for the complicated environment of vehicular communications, which is achieved by considering various application traffic patterns. However, the handover needs to be taken into consideration.
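The centralized pattern just described is commonly realized with FedAvg-style aggregation. The PyTorch sketch below is a minimal, hypothetical illustration of one round: each device fits a copy of the global model on its local data, and the server averages the returned parameters weighted by local dataset size. The model, loss, and data loaders are placeholders rather than the setup of any surveyed work.

```python
# Minimal FedAvg-style sketch of one centralized FL round (assumption: plain
# weighted parameter averaging); model, data, and weights are placeholders.
import copy
import torch

global_model = torch.nn.Sequential(torch.nn.Linear(10, 1))  # placeholder model

def local_update(model_template, data_loader, epochs=1, lr=0.01):
    """Device side: train a copy of the global model on local data only."""
    model = copy.deepcopy(model_template)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for x, y in data_loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model.state_dict()  # only parameters leave the device, never raw data

def fed_avg(model, local_states, weights):
    """Server side: aggregate local parameters, weighted by local dataset size
    (weights are assumed to sum to one)."""
    avg = copy.deepcopy(local_states[0])
    for key in avg:
        avg[key] = sum(w * s[key] for w, s in zip(weights, local_states))
    model.load_state_dict(avg)
    return model
```

Since only `state_dict` parameters travel between devices and the server, raw measurements stay local, which is the privacy property emphasized throughout this subsection.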
In [77], an FL system was considered, comprising a single base station (BS) and several mobile users. The local machine learning model is trained by the mobile users using their own data. The authors developed an incentive mechanism based on an auction game involving the BS and the mobile users, in which the BS assumes the role of auctioneer and the mobile users act as sellers. In the suggested game, all mobile users submit their bids based on the lowest energy cost they experience while participating in the FL scenario. A primal-dual greedy auction is proposed as an approach for determining winners in the auction and enhancing social welfare. Eventually, numerical results demonstrated the performance gains of the suggested approach. Nevertheless, signaling overhead among users needs to be considered. In [78], a federated learning approach designed specifically for cellular wireless networks was suggested. A centralized computing site is responsible for training a learning model over several users. Furthermore, the authors proposed a scheduling technique that enhances the convergence rate and investigated the impact of local computing steps on algorithm convergence. They demonstrated that federated learning algorithms can tackle the problems provided the issue of wireless channel unreliability is ignored. Nonetheless, convergence analysis with convex loss functions is still an open issue.
In [79], the authors examined the utilization of wireless power transfer (WPT) in assisted FL, where the cellular BS handles the responsibility of charging the wireless devices (WDs) via WPT and also receives the locally trained models of the WDs for model aggregation throughout each iteration of FL. The authors proposed a joint optimization approach for setting the processing rate of each WD, the WPT duration for the BS to charge each WD, and the number of local iterations for each WD. The objective is to reduce the total latency of FL iterations until the convergence condition is achieved. Despite its non-convex nature, the investigators divided the problem into two subproblems and proposed a simulated annealing-based technique to solve them sequentially and effectively. The numerical results demonstrated the superiority of the suggested strategy over heuristic techniques. Nevertheless, user mobility must be taken into account. In [80], the authors incorporate mobile edge computing and digital twin technologies into a hierarchical FL framework. In situations where users are outside the coverage area of the small base stations, the framework facilitates the involvement of macro base stations in supporting the users' local computation. This collaboration effectively decreases transmission delay. Additionally, it maintains the privacy of users and facilitates their increased participation in the training process, hence enhancing the accuracy of the FL. Furthermore, they provided a deep reinforcement learning approach to address the joint optimization of dynamic user association and resource allocation, with the main goal of reducing energy consumption within a certain time delay. The simulation results showed that the suggested system successfully decreases the rate of task transmission failures and energy consumption compared to the baseline scheme. Additionally, it results in cost savings in communication across the digital twin networks. Nevertheless, interference management needs more consideration and attention.
5.2.2 Decentralized Federated Learning
Decentralized federated learning allows edge devices to directly share and aggregate model updates, guaranteeing scalable and robust model training. In [81], a distributed energy-efficient resource allocation mechanism is investigated. Federated reinforcement optimization is proposed to solve the issue of channel assignment and transmit power. The suggested framework effectively tackled the non-convex problem, addressing the challenges of computational complexity and transmission cost. The results obtained from quantitative analysis and numerical simulations demonstrated that the suggested approach performs better than previous decentralized benchmarks. It also effectively reduced communication costs and minimized the data processing load on the base station compared to the centralized technique. Furthermore, the efficiency of the proposed framework has been validated by simulations. However, energy efficiency needs further consideration. In [82], the authors presented three techniques, specifically random scheduling, round-robin, and proportional fair, for resource allocation in cellular networks utilizing FL. An evaluation of resource allocation efficiency is conducted using the Fashion-MNIST dataset, taking into account each user's convergence speed and time. The performance results indicated that the proportional fair scheme performs best among the three when evaluated over communication rounds. The results of the research have significant potential for improving resource allocation and user selection in the field of wireless network architecture. Nonetheless, power control needs to be considered.
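In the decentralized setting there is no aggregating server; one common realization is gossip-style averaging, where each device mixes its parameters with those of its direct neighbors every round. The short NumPy sketch below is a generic illustration of this idea under an assumed fixed topology, not the specific protocol of [81] or [82].

```python
# Hypothetical gossip-style decentralized FL round: every device averages its
# parameter vector with those of its neighbors; no central server is involved.
import numpy as np

def gossip_round(params, adjacency):
    """params: list of 1-D parameter vectors (one per device);
    adjacency[i][j] is True if devices i and j exchange updates directly."""
    updated = []
    for i, p in enumerate(params):
        neighbors = [params[j] for j in range(len(params)) if adjacency[i][j]]
        updated.append(np.mean([p] + neighbors, axis=0))  # local mixing step
    return updated
```

Repeated rounds drive all devices toward a network-wide consensus model while using only neighbor-to-neighbor links, which is the source of the scalability and robustness benefits noted above.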
In [83], the researchers investigated relay selection and resource allocation approaches based on FL for multicellular configurations. The main objective is to decrease the training time of the ML models by exploiting the mobile edge computing characteristics of the FL technique. Specifically, a DNN model trained on several edge devices has been investigated for predicting the most effective relay node for each user. The main improvement of the technique given in this study is the involvement of both EE and SE as network metrics, as well as training time and accuracy, in the DNN training and model aggregation process. Based on the results provided, it is clear that EE and SE can be greatly enhanced when relay node edge devices perform a part of the ML training workload, in comparison to centralized learning-based techniques. Additionally, the FL method improved training accuracy while substantially lowering training time. However, the tradeoff between SE and EE has to be taken into consideration. In [84], the authors examined federated deep reinforcement learning to optimize channel allocation and power management for vehicle-to-vehicle (V2V) communication in a decentralized manner. The proposed methodology leverages the capabilities of DRL and FL to effectively address the reliability and delay demands of V2V communication while simultaneously optimizing the data rates of the network. They created an individual V2V agent employing the dueling double deep Q-network and developed a reward function to train the V2V agents simultaneously. The suggested federated scheme has been validated via simulations and demonstrates superiority over state-of-the-art schemes in terms of sum rate. Nevertheless, the mobility of users must be taken into consideration.
In [85], the authors proposed a decentralized resource allocation method based on FL-aided DRL. The objective of this research is to optimize the overall capacity and decrease energy consumption. The authors provided a brief description of four technologies that support their proposal: D2D communication enhances network performance, FL protects the privacy of users while facilitating a decentralized model training paradigm, and DRL allows users to develop resource allocation policies based on network states. The efficiency of the suggested techniques is evaluated using simulations in mmWave and THz scenarios individually. The simulation results demonstrated that the suggested approach enables users to dynamically allocate resources based on limited network state information, resulting in significant improvements in network performance, specifically in terms of throughput and power consumption. Nonetheless, computational complexity needs to be taken into consideration. In [86], a distributed resource allocation approach is presented to optimize EE while simultaneously guaranteeing high QoS to users. A meta-federated reinforcement learning method is developed to deal with wireless channel issues. Users can optimize their transmit power and channel allocation by employing neural network models. Based on decentralized reinforcement learning, the federated learning approach facilitates cooperation and fosters mutual benefit among users. The results indicated that the proposed meta-federated reinforcement learning framework enhanced the efficiency of the reinforcement learning process and reduced overhead. Furthermore, the proposed framework outperforms the conventional decentralized algorithm in terms of energy efficiency throughout different scenarios. However, the SE demands additional attention and investigation.
5.2.3 Open Issues and Proposed Solutions
In the context of centralized and decentralized FL, several open issues regarding resource management remain challenging. Centralized FL suffers from data privacy concerns and considerable communication costs when a central server aggregates models from distributed nodes. The aggregation process can be ineffective because of the diverse network contexts and the varied quality of local models. Adaptive communication mechanisms and compression are proposed to decrease the cost of model updates and data transmission. Additionally, it is critical to keep user data private while combining models by using privacy-preserving technologies such as secure multiparty computation and differential privacy. On the other hand, decentralized FL raises concerns about coordination and consistency. Blockchain technology and consensus algorithms can synchronize and secure network model updates.
Centralized and decentralized FL face the challenge of channel and power distribution in dynamic network circumstances. Models may be trained to respond dynamically to new data using reinforcement learning methods included in FL frameworks. FL models may capture user behaviors and preferences using user feedback loops and personalized learning paradigms for user association and mode selection. Distributed optimization methods help both FL approaches handle interference by learning interference patterns locally and working globally to discover optimum solutions. Improved 6G wireless network efficiency and scalability are the goals of these FL-centric solutions. Table 3 illustrates the recent work of FL approaches.

5.3 Reinforcement Learning (RL)
Reinforcement learning is expected to significantly impact 6G wireless communication networks by providing novel approaches to enhance network performance and efficiently manage resources [27]. RL involves agents acquiring decision-making skills via their interactions with the environment, where they obtain feedback in the form of penalties or rewards [87]. This trial-and-error process enables the system to adjust and respond to fluctuations in network circumstances, resulting in improved efficiency and reliability [88]. RL has several potential uses, such as in energy management, traffic prediction, spectrum allocation, and other areas where more advanced and autonomous network operations are desired. There are two main subfields within RL: deep reinforcement learning and Q-learning.
5.3.1 Deep Reinforcement Learning
By integrating deep neural networks with conventional reinforcement learning techniques, deep reinforcement learning improves agents’ ability to manage complicated decision-making tasks and high-dimensional state spaces. In [89], the resource allocation problem in D2D communication within cellular networks is investigated. This paper aims to improve the network throughput of both cellular and D2D pairs by determining the appropriate transmission power and spectrum channel for each D2D link. The authors propose a multi-agent deep reinforcement learning approach to reduce computational complexity. This approach involves sharing information among participating devices, such as their positions and resources. The results demonstrate that the suggested method outperforms previous schemes, particularly with a high density of devices, which causes a significant performance improvement. However, the proposed approach needs to be considered with dense user scenarios to examine its effectiveness.
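As a concrete illustration of the DRL pattern used in works such as [89,90], the following hypothetical PyTorch sketch shows a single agent that selects a discrete (channel, power level) action epsilon-greedily from a learned Q-network and performs one temporal-difference update; the state features, reward, and all sizes are illustrative assumptions rather than the surveyed designs.

```python
# Hypothetical DQN-style sketch for joint (channel, power level) selection in
# the spirit of [89,90]; state features, reward, and all sizes are assumptions.
import random
import torch
import torch.nn as nn

N_CHANNELS, N_POWER_LEVELS = 4, 3
N_ACTIONS = N_CHANNELS * N_POWER_LEVELS  # one discrete action per (channel, power) pair
STATE_DIM = 16                           # assumed size of the local observation

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, N_ACTIONS),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-4)

def select_action(state, epsilon=0.1):
    """Epsilon-greedy over Q-values; the action index encodes (channel, power)."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)   # explore
    with torch.no_grad():
        return int(q_net(state).argmax())    # exploit

def td_update(state, action, reward, next_state, gamma=0.99):
    """One temporal-difference step toward r + gamma * max_a' Q(s', a')."""
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    prediction = q_net(state)[action]
    loss = (prediction - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a multi-agent deployment such as [89], each D2D link would run its own copy of this loop on its local observations, with the reward reflecting the network-level throughput objective.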
In [90], a deep reinforcement learning-based approach is proposed to jointly optimize the channel and power allocation in a multi-cell network. The proposed approach aims to improve the average system throughput by mitigating co-channel interference and guaranteeing the QoS constraint. The investigated optimization problem is divided into channel allocation and power allocation sub-problems. Particularly, the double deep Q-network method and the deep deterministic policy gradient method are proposed to solve the channel allocation sub-problem and the power allocation sub-problem, respectively. Simulation results demonstrated that the proposed approach achieved superior performance compared to other alternative approaches. However, for communication computing integration to be effective in multi-cell systems, numerous antennas must be considered. In [91], the authors examine the sum rate maximization problem for a wireless power transfer-enabled IoT network. The authors present a DRL technique to determine the sub-optimal offloading decision. Additionally, they develop an efficient solution using the Lagrangian duality technique to find the optimal time allocation. The simulation results confirmed that the proposed approach, which is based on DRL, achieved over 95 percent of the maximum rate while maintaining minimal complexity. Moreover, the proposed approach outperformed the conventional actor-critic method regarding running time, computational efficiency, and convergence speed. However, more complex scenarios need more attention and consideration to prove the superiority of the proposed approach.
In [92], the authors propose a resource allocation approach based on DRL for NOMA-based D2D communications. The DRL-based approach optimizes the allocation of channel spectrum and the transmit power of D2D links to maximize EE. By considering the dynamic nature of the environment, the approach allows efficient resource allocation to D2D links and demonstrates superior performance in fairness, energy efficiency, and user coordination; the performance gain is particularly evident in scenarios with severe inter-user interference. Nonetheless, mode selection needs to be considered to further improve network performance. In [93], the authors investigate the resource allocation problem of NOMA-based D2D communication underlaying HetNets. Introducing NOMA improves system performance by reducing the interference generated by D2D transmissions. A multi-agent deep reinforcement learning approach is proposed to solve the resource allocation problem and thereby improve system performance: optimizing the power allocation parameters maximizes D2D user throughput, while channel allocation allows cellular user channels to be reused efficiently. Numerical results demonstrate remarkable convergence and efficiency, ultimately leading to beneficial system performance. However, imperfect channel state information needs to be taken into consideration.
In [94], the authors investigate the problem of resource management under imperfect channel state information in IoT networks. A deep reinforcement learning-based approach, assisted by a gated recurrent unit layer, is proposed to solve the resource management optimization problem in a distributed cooperative manner, enhancing the system’s performance. The proposed approach can converge faster than the current benchmark frameworks while maintaining the same number of iterations, and simulation results demonstrate its superiority over the reference approaches in terms of EE, particularly in complex scenarios. Nevertheless, the proposed approach needs to be evaluated on networks of different scales. In [95], the authors introduce a distributed multi-agent deep reinforcement learning approach to solve the joint user association and power allocation optimization problem in dense wireless networks. The proposed approach is scalable with respect to the size and density of the wireless network and takes communication delay and feedback into consideration. Simulation results show that the recommended approach outperforms decentralized approaches in terms of average rates and achieves performance almost equal to, or even better than, a centralized benchmark approach. Nonetheless, the proposed approach needs to be augmented with expert policies that rely on optimization.
5.3.2 Q-Learning
Q-learning, a model-free reinforcement learning method, optimizes decision-making by updating a table of Q-values based on rewards received from the environment. In [96], the authors investigate the energy efficiency problems associated with D2D communications in HetNets. They propose a Q-learning-based methodology to optimize the association between the user equipment and the BS or access point so as to reduce the consumed power. The proposed approach demonstrates the capability to balance exploration and exploitation sufficiently to achieve efficient optimization. According to the results, the proposed approach attains a near-optimal solution in the single-cell scenario. However, user association in multi-cell scenarios must be considered. In [97], the authors propose an intelligent D2D clustering strategy together with a joint resource allocation and power control algorithm for D2D networks. A hierarchical clustering technique based on machine learning is presented to construct D2D multicast clusters dynamically, taking into account user preference and reliability among D2D multicast users. Then, Q-learning and Lagrange dual decomposition algorithms are proposed to solve the resource allocation and power control problems, respectively, to optimize the total energy efficiency. Results showed that the proposed approach outperforms conventional techniques in terms of overall energy efficiency and throughput. However, the tradeoff between EE and SE needs to be taken into consideration. In [98], a mode selection approach based on a received signal strength (RSS) threshold is proposed for a dense NOMA-based D2D network. To achieve the highest possible sum rate, the authors propose a multi-agent reinforcement learning method for adjusting the RSS threshold of each SBS in both downlink and uplink scenarios. Numerical results showed that the proposed method achieved higher data transmission rates and greater coverage in dense NOMA-based D2D communication networks. However, power control must be considered.
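As a minimal illustration of the tabular update behind these Q-learning schemes, the sketch below learns which of four channels is interference-free. The environment, reward, and hyperparameters are toy assumptions rather than any cited system model.

```python
import random

# Tabular Q-learning sketch for a toy channel-selection task: states are
# channels, actions switch channels, and the reward is 1 on the single
# interference-free channel. All numbers are illustrative.
n_channels = 4
good_channel = 2
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = [[0.0] * n_channels for _ in range(n_channels)]  # Q[state][action]

random.seed(0)
state = 0
for step in range(5000):
    # Epsilon-greedy action selection over the Q-value table.
    if random.random() < epsilon:
        action = random.randrange(n_channels)
    else:
        action = max(range(n_channels), key=lambda a: Q[state][a])
    next_state = action                     # action = channel to move to
    reward = 1.0 if next_state == good_channel else 0.0
    # Core update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state][action] += alpha * (
        reward + gamma * max(Q[next_state]) - Q[state][action]
    )
    state = next_state

print(max(range(n_channels), key=lambda a: Q[0][a]))  # -> 2
```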
In [99], the issue of resource allocation for D2D communications in IoT networks is examined to maximize the system’s overall EE. The Q-learning algorithm is used to address the channel allocation problem, while the power control problem is solved using the Dinkelbach method. Simulation results showed that the proposed approach outperforms prior approaches in the literature and achieves considerable EE improvements via spectrum sharing and energy harvesting for D2D and IoT devices. However, cross-tier interference needs to be considered. In [100], the authors propose a joint power allocation and relay selection approach to enhance energy efficiency in relay-assisted D2D communication networks. The Dinkelbach algorithm and Lagrange dual decomposition are presented to solve the power allocation problem and guarantee QoS for all users, while a Q-learning algorithm is proposed to solve the relay selection problem. Finally, they provide a comprehensive theoretical examination of the suggested approach in terms of signaling overhead and computational complexity. The simulation results confirm that the suggested approach achieves a total EE for D2D pairs extremely close to the theoretical maximum. Nevertheless, the optimization of channel allocation needs to be considered.
5.3.3 Open Issues and Proposed Solutions
There are still several unsolved issues regarding resource management using Q-learning and DRL. These issues include continuous action spaces, which complicate learning and can result in suboptimal policies and poor convergence. Furthermore, conventional Q-learning has problems adapting to new circumstances since it relies on discrete state-action pairs, which are not applicable to unpredictable and dynamic 6G settings. DRL provides an answer by facilitating more scalable learning and managing continuous spaces through the use of neural networks to estimate the Q-values. Nevertheless, overfitting and instability are issues for DRL models, especially in environments with significant variability. Experience replay and target networks are suggested to stabilize training and enhance performance. Transfer learning may improve DRL algorithms for efficient resource management by allowing them to reuse prior knowledge from comparable tasks and thus accelerate convergence. Multi-agent reinforcement learning improves network performance by allowing agents to collaborate on interference control. Practical implementation in 6G networks requires regularization and explainable AI strategies to make these models robust and interpretable. Table 4 summarizes recent work on RL approaches.
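The following sketch illustrates the two stabilizers mentioned above, experience replay and a target network, in a generic deep Q-learning setting. The network sizes, hyperparameters, and dummy transitions are illustrative assumptions, not any cited system model.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# Sketch of the two stabilisers: a replay buffer that breaks temporal
# correlation, and a slowly updated target network.
state_dim, n_actions, gamma = 8, 4, 0.9
q_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
target_net.load_state_dict(q_net.state_dict())  # start from identical weights
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)

def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)          # decorrelated minibatch
    s, a, r, s2 = (torch.stack(x) for x in zip(*batch))
    with torch.no_grad():                              # bootstrap from the frozen net
        y = r + gamma * target_net(s2).max(dim=1).values
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Store dummy transitions (state, action, reward, next_state) and train.
for _ in range(100):
    replay.append((torch.randn(state_dim), torch.tensor(random.randrange(n_actions)),
                   torch.tensor(1.0), torch.randn(state_dim)))
    train_step()
# Periodically copy the online weights into the target network:
target_net.load_state_dict(q_net.state_dict())
```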

5.4 Evolutionary Learning (EL)
Evolutionary learning is becoming popular in 6G wireless communication networks because it simulates natural evolutionary processes to address complicated optimization challenges [101]. This method is very flexible since it evolves solutions across generations using processes such as mutation, crossover, and selection, making it well suited to complex and dynamic 6G configurations. Evolutionary learning performs well in network optimization and resource allocation, where standard approaches often fail. 6G networks can maximize efficiency, robustness, and performance by leveraging evolutionary concepts [102]. Swarm intelligence and genetic algorithms are two categories of evolutionary learning.
5.4.1 Genetic Algorithms
Genetic algorithms replicate natural selection to optimize challenging problems by iteratively selecting, crossing over, and mutating candidate solutions. In [103], the authors present a power optimization model employing a genetic algorithm to manage power allocation effectively. Each access point uses the modified genetic algorithm to assign the optimal power to each cellular user until the fitness criterion is fulfilled. Furthermore, to improve the network’s effectiveness, a weight-based user scheduling mechanism is developed; this algorithm chooses a user for any given base station based on distance and the RSS indicator. The power optimization model and user-scheduling method were given equal weights when analyzing the power consumption and spectrum efficiency performance indicators. The simulation results indicate that the weight-based user-scheduling algorithm exhibited superior performance, which was further validated by using a modified genetic algorithm to optimize the weight allocation. Moreover, optimal transmission power distribution improves spectral efficiency while decreasing power consumption. Nevertheless, the authors considered only equal weights in the proposed user scheduling algorithm.
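As a minimal sketch of how selection, crossover, and mutation evolve a power allocation, consider the toy example below. The fitness function (a log-rate reward minus a power penalty) and all constants are illustrative assumptions, not the model of [103].

```python
import math
import random

# Minimal GA sketch for a toy power-allocation problem: evolve a power
# level per user that maximizes a toy fitness.
N_USERS, POP, GENS, P_MAX = 4, 20, 100, 1.0

def fitness(p):
    # Toy objective: log(1 + SNR)-style reward minus an energy cost.
    return sum(math.log2(1.0 + 10.0 * x) for x in p) - 2.0 * sum(p)

def crossover(a, b):
    cut = random.randrange(1, N_USERS)        # single-point crossover
    return a[:cut] + b[cut:]

def mutate(p, rate=0.2):
    # Gaussian perturbation, clipped to the feasible power range.
    return [min(P_MAX, max(0.0, x + random.gauss(0.0, 0.05)))
            if random.random() < rate else x for x in p]

random.seed(0)
pop = [[random.uniform(0.0, P_MAX) for _ in range(N_USERS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                  # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print([round(x, 2) for x in best])            # each entry approaches ~0.62
```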
In [104], the researchers examine the problem of joint user association and power distribution in 5G mmWave networks. This research aims to reduce power consumption while preserving user QoS, taking into account the on/off switching strategy of the BSs. The problem is first formulated as an integer linear program to obtain the optimal solution, and a heuristic technique based on the GA is presented to address the NP-hardness of the problem. In simulations, the suggested GA outperforms the benchmark solutions, especially under high user loads, and attains a near-optimal solution. Nevertheless, power consumption needs more investigation and consideration. In [105], the authors examine the issue of optimizing the topology and routing in integrated access and backhaul (IAB) networks to ensure a high coverage probability. They assess the impact of routing on bypassing temporal obstructions and create effective genetic algorithm-based strategies for placing IAB nodes and distributing non-IAB backhaul links. In addition, the service coverage probability, defined as the probability that the users’ minimum rate requirements are met, is investigated. Tree foliage, antenna gain, and blockage parameters are all taken into consideration. The simulation results indicate that IAB is a compelling strategy for facilitating the network densification necessary for 5G and future generations when implemented in a suitable network architecture. Nevertheless, co-tier interference among small base stations needs more attention and consideration.
In [106], the authors proposed a resource allocation scheme based on a Quantum-Inspired Genetic Algorithm in D2D-based C-RAN. The proposed resource allocation approach integrates the principles of genetic and quantum computing algorithms to distribute resource blocks across cellular and D2D users. The low-complexity allocation strategy is determined by using the predicted D2D throughput. Moreover, the Quantum-Inspired Genetic Algorithm is applied within the population matrix to determine the optimal strategy for distributing user resources across several D2D users. Simulation results confirm the superiority of the proposed approach compared to existing methods. However, power control needs to be considered. In [107], the authors investigate interference mitigation techniques in 5G HetNets. They examine the current subcarrier allocation strategies based on GA and Particle Swarm Optimization (PSO) and the dynamic subchannel allocation approaches based on Fractional Frequency Reuse to determine their limitations. The evolutionary biogeography-based dynamic subcarrier allocation approach is proposed to address the challenges associated with cross-tier interference. The suggested approach can dynamically assign the subchannels to both macro users and small-cell users. The results indicate that the proposed approach achieves superior SINR for both macro users and small-cell users. Additionally, it reduces the outage probability and enhances spectral efficiency compared to the currently used fractional frequency reuse-based dynamic subcarrier allocation technique. However, power consumption needs to be considered.
5.4.2 Swarm Intelligence
Swarm algorithms employ several agents that explore and optimize solutions cooperatively. In [108], the authors provide innovative algorithms that combine a linearly increasing inertia weight binary PSO with Soft Frequency Reuse (SFR) techniques. The objective is to reduce power consumption in small cells. The binary PSO algorithm is used to achieve small-cell on/off switching, hence minimizing system power consumption while maintaining the QoS achieved by the users. Furthermore, the binary PSO method utilizes the linearly increasing inertia weight to improve its convergence and determine the minimal number of active small cells. In addition, the authors introduce a classification-tree-based SFR approach in which the small cells are partitioned into center and edge regions, with distinct sub-bands assigned to the edge regions of adjacent small cells; this allocation strategy reduces interference among the small cells. The suggested algorithms outperform existing algorithms by reducing network power consumption and small-cell interference, improving system throughput and energy efficiency. However, the accuracy of the proposed algorithm needs to be examined with a dynamic inertia weight.
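A minimal sketch of PSO with a linear inertia-weight schedule is shown below on a toy continuous objective. The cited work [108] applies a binary variant to small-cell on/off switching, so this continuous form and all its constants are only illustrative.

```python
import random

# Continuous PSO sketch with a linearly increasing inertia weight,
# minimising a toy "power cost" objective.
DIM, SWARM, ITERS = 2, 15, 200
W_START, W_END = 0.4, 0.9            # linear inertia-weight schedule
C1 = C2 = 1.5

def objective(x):
    return sum(v * v for v in x)      # minimum at the origin

random.seed(0)
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=objective)[:]

for t in range(ITERS):
    w = W_START + (W_END - W_START) * t / (ITERS - 1)   # inertia grows linearly
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # Velocity blends momentum, the personal best, and the global best.
            vel[i][d] = (w * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]

print([round(v, 3) for v in gbest])   # approaches [0, 0]
```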
In [109], the authors propose a channel allocation method for NOMA-based 5G backhaul wireless networks that accounts for power consumption and the traffic demands of small cells. The research objective is to improve the assignment of uplink/downlink channels in a mutually beneficial manner to enhance user fairness. The method involves two steps. First, a traveling salesman problem formulation is used for the initial channel allocation, since it resembles many-to-many user-channel allocation. Second, a modified PSO approach is used with a decreasing coefficient, which acts as a stochastic estimation mechanism for allocation updates. Adding a random velocity improves the exploration behavior and convergence rate of the modified PSO. The proposed approach outperforms benchmark schemes in fairness and network capacity. Nevertheless, co-tier interference needs to be considered. In [110], to tackle several objective functions, such as the user data rate, spectrum efficiency, and energy efficiency of a 5G wireless network with massive MIMO, the authors propose a self-organizing particle swarm optimizer with multiple objectives. Simulation results showed that the proposed methodology is an effective and promising way to solve such multi-objective problems. However, user fairness must be considered.
In [111], the authors propose a channel selection method that utilizes the Chicken Swarm Optimization (CSO) algorithm to determine the most efficient channel in a massive MIMO network. This scheme aims to optimize power allocation and beam-forming vectors, with the primary goal of generating beam-forming vectors that accurately fulfill the SINR requirements. The CSO algorithm generates the beam-forming vectors and power distribution, which are influenced by the channel properties. The channel state information is predicted, and a projection matrix is subsequently constructed using a channel estimation framework. The comparison results indicate that the suggested scheme exhibits superior SE and EE performance compared to the benchmark schemes. However, the proposed method is examined with only a small number of users.
In [112], the authors investigate the use of an artificial neural network to solve an optimization problem of resource allocation and relay selection, taking user mobility into account. The proposed approach aims to achieve maximum spectrum efficiency in cooperative HetNets. To accomplish this objective, a novel resource allocation and relay selection approach based on artificial neural networks is introduced and compared to a PSO-based resource allocation and relay selection algorithm. Numerical results demonstrated that maximum spectrum efficiency is attained by considering relay mobility and properly allocating resources. According to the results, the proposed approach demonstrated superior spectrum efficiency compared to the PSO algorithm. Nevertheless, power consumption needs further consideration. In [113], the authors explore a PSO-based joint channel allocation and power control approach to enhance network performance and effectively reduce interference in D2D-based cellular networks. By carefully designing the fitness values for the two sub-problems, the algorithm avoids being trapped in infeasible solutions. Simulations have shown that the proposed algorithm can enhance network throughput in D2D-based cellular networks. Furthermore, it outperforms the non-joint interference management algorithm considered in the study. However, the user delay requirement needs to be considered.
5.4.3 Open Issues and Proposed Solutions
Resource optimization in 6G wireless networks poses significant challenges for both genetic and swarm-based evolutionary learning. In dynamic 6G network environments, these evolutionary algorithms often exhibit unsatisfactory convergence rates and incur significant computational costs. Premature convergence is a problem for genetic algorithms, since their reliance on mutation and crossover can make it difficult to maintain solution diversity. To identify optimal solutions in highly dynamic environments, swarm intelligence algorithms such as ant colony optimization and particle swarm optimization may struggle to balance exploration and exploitation. Suggested solutions include hybrid approaches that combine evolutionary algorithms with other optimization methods; for example, adaptive parameter tuning can dynamically adjust algorithm parameters depending on the network state, or machine learning models can be integrated to guide the search process. Distributed and parallel computing techniques can also increase scalability and reduce processing costs. Cooperation mechanisms across swarm algorithms may improve the adaptability of interference management to changing interference patterns. The performance of 6G wireless networks may be greatly improved by these techniques, which help guarantee capacity and flexibility. Table 5 summarizes recent work on EL approaches.
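As an illustration of the adaptive parameter tuning suggested above, the sketch below raises a genetic algorithm's mutation rate when population fitness diversity collapses. The diversity signal, the 0.1 threshold, and the rate bounds are illustrative assumptions.

```python
def adaptive_mutation_rate(fitnesses, base=0.05, max_rate=0.5):
    """Raise the mutation rate as population fitness diversity collapses,
    restoring exploration when a GA converges prematurely."""
    mean = sum(fitnesses) / len(fitnesses)
    spread = (sum((f - mean) ** 2 for f in fitnesses) / len(fitnesses)) ** 0.5
    diversity = spread / (abs(mean) + 1e-9)      # relative spread of fitness
    collapse = max(0.0, 1.0 - diversity / 0.1)   # 0 when diverse, 1 when collapsed
    return base + (max_rate - base) * collapse

# A diverse population keeps the base rate; a converged one mutates more.
print(adaptive_mutation_rate([1.0, 2.0, 3.0, 4.0]))  # -> 0.05
print(adaptive_mutation_rate([2.49, 2.50, 2.51]))    # -> close to 0.5
```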

6 AI Strategies Analysis in 6G HetNets
A comparative analysis of several AI strategies, namely DL, FL, RL, and EL, applied in 6G HetNets is presented in Fig. 4. The analysis highlights the number of research papers dedicated to each aspect by categorizing these strategies according to their application in user admission, channel utilization, and power control. According to the figure, power control is the most thoroughly examined domain across all AI strategies, with the greatest number of publications reported for DL, FL, RL, and EL. This trend highlights the essential role of AI-driven power optimization in 6G networks, where EE and QoS are crucial.

Figure 4: AI strategies distribution in 6G HetNets via different optimization strategies
User admission and channel utilization reflect different research interests among AI strategies. FL demonstrates an equitable emphasis on all three categories, signifying its importance in the efficient management of distributed resources. Conversely, RL devotes less attention to user admission, indicating a possible research gap in this domain. The figure shows the increasing significance of AI in tackling critical challenges in 6G HetNets. Nevertheless, the uneven research concentration across strategies suggests the need for further investigation, particularly into integrating AI methods to optimize resources comprehensively.
7 Future Trends and Research Directions
This section outlines future research trends that combine advanced resource management techniques with AI algorithms in 6G wireless networks.
7.1 Advanced Network Automation AI
With the use of AI in 6G, several network management operations may be automated, including configuration, optimization, and planning [114]. This greatly improves operational efficiency by reducing the need for human intervention, making more effective and efficient network planning techniques achievable. Predicting traffic demand and allocating resources to detect and relieve network congestion are two examples of how AI can potentially be used. Automation of network service configuration is another area where AI excels; this can help reduce errors and improve the reliability of network configuration. AI is also useful for improving the effectiveness and efficiency of networks, for example by altering routing tables to reduce latency or optimizing resource allocation to increase throughput.
7.2 AI for Green Communication
Real-world implementations of AI-powered green communication services in the 6G era are still lacking [115]. Energy sources that are partially controllable (e.g., RF energy) or uncontrollable (e.g., solar) may benefit from the deployment of AI approaches. Furthermore, AI approaches may be used to analyze the relationship between energy harvesting methods, which are uncontrolled but predictable, and future energy harvesting [116]. Since transmission rules for terrestrial networks vary, AI-assisted solutions are becoming more attractive for connecting existing network traces with future transmission policies. Applying these technologies to the Integrated Air-Ground-Underwater Network scenario is also quite feasible.
7.3 Blockchain-Based Spectrum Sharing
To satisfy the enormous spectrum demand of the vast numbers of IoT devices connecting to 6G networks, spectrum sharing can efficiently enhance resource utilization [117]. However, the administrative expenses, efficiency concerns, and transaction costs associated with centralized spectrum access schemes limit the expansion of 6G network applications. Recently, blockchain technology has attracted significant interest [118]. With blockchain technology, all participants can securely contribute data blocks, each including a timestamp, a cryptographic hash, and a transaction record, creating a distributed database that can handle all computing needs [119]. Thus, a distributed spectrum-sharing scheme for 6G mobile communication that is secure, flexible, affordable, and efficient can be established using blockchain technology. Blockchain-based studies of the 6G resource optimization problem therefore have significant potential.
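A minimal hash-chained ledger sketch mirroring the block contents described above (timestamp, hash, transaction record) is given below. The field names and the spectrum-lease example are illustrative assumptions; a real distributed spectrum-sharing ledger would add consensus, signatures, and validation.

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Create a block containing a timestamp, a transaction record, and a
    cryptographic hash linking it to the previous block."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,       # e.g., spectrum-lease records
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block([], prev_hash="0" * 64)
block1 = make_block(
    [{"lessor": "MNO-A", "lessee": "IoT-gateway-17", "band_MHz": [3500, 3520]}],
    prev_hash=genesis["hash"],
)
# Tampering with block1's transactions would change its recomputed hash,
# breaking the chain for every later block.
print(block1["hash"][:16], "links to", block1["prev_hash"][:16])
```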
7.4 Novel Network Architecture
The integration of the Smart Integration Identifier network design is highly anticipated in 6G networks [120]. Intelligently connecting massive numbers of devices is central to such advanced infrastructure initiatives. The Smart Integration Identifier network design can utilize knowledge-area concepts for smart identification services. Accordingly, a 6G network within the Smart Integration Identifier network architecture is an effective way of accommodating enormous numbers of devices.
7.5 Internet of Senses
The need for increasingly sophisticated use cases connecting the real and virtual worlds will grow in parallel with the advent of 6G [121]. The internet of senses is one example of how our senses will continue to extend beyond our physical bodies [122]. Intelligent networks powered by AI will make it feasible to provide reliable and affordable solutions for such application cases. 6G networks will deliver realistic communication over the internet of senses, allowing complete telepresence and removing geographical barriers to connection [123]. Access to distant activities and experiences will be made possible by AI-powered, personalized, realistic devices that can communicate accurately with the human body, enhancing human communications. 6G networks will also pave the way for novel methods of secure communication that prioritize user identification and access.
7.6 Extended Reality in Healthcare
The 6G communication network will greatly enhance the importance of the Internet of Health [124]. It will also include enhanced security measures that prevent unauthorized parties from intercepting or manipulating sensitive user data [125]. In addition, complicated healthcare situations have shown that teleoperation supported by extended reality (XR) can increase operational efficiency [126]. In the coming years, 6G network connections are expected to be used by haptic XR robots, which will generate a new kind of traffic with specific QoS requirements [127]. As a potential area of future study, AI-enabled solutions for cellularly connected virtual reality and unmanned aerial vehicle networks in XR-based remote surgery would be promising.
8 Conclusion
An extensive survey of current research on AI-enabled 6G wireless networks is presented in this article. It reviews the state-of-the-art AI approaches for resource management in 6G networks, including Deep Learning, Federated Learning, Reinforcement Learning, and Evolutionary Learning, and provides an overview of how these approaches tackle technical issues related to user association, channel allocation, power allocation, and mode selection. The technical obstacles and performance indicators outlined in this paper will provide direction for developing distributed AI architectures that address the numerous resource optimization concerns expected to affect 6G networks. The paper highlights the motivation for and importance of machine learning approaches in enhancing intelligent resource management for the self-configuration of 6G networks, together with open issues, potential solutions, and future research directions and trends for AI-enabled 6G wireless applications. Finally, to automate network operations and analyze massive data, these future research trends are expected to provide a new perspective on incorporating innovative AI approaches into the design guidelines of 6G wireless networks.
Acknowledgement: The authors would like to thank the respected editor and reviewers for their support.
Funding Statement: This research was funded by Universiti Kebangsaan Malaysia, Fundamental Research Grant Scheme having Grant number FRGS/1/2023/ICT07/UKM/02/1, and Universiti Kebangsaan Malaysia Geran Universiti Penyelidikan having Grant number GUP-2024-009. The research was also supported by the Posts and Telecommunications Institute of Technology Research Grant 2024.
Author Contributions: The authors confirm contribution to the paper as follows: study conception and design: Kaharudin Dimyati, Mhd Nour Hindia, Effariza Binti Hanafi, Hayder Faeq Alhashimi; data collection: Mhd Nour Hindia, Hayder Faeq Alhashimi; analysis and interpretation of results: Hayder Faeq Alhashimi, Mhd Nour Hindia; draft manuscript preparation: Hayder Faeq Alhashimi, Feras Zen Alden, Effariza Binti Hanafi; review and editing: Faizan Qamar, Quang Ngoc Nguyen, Mhd Nour Hindia; funding acquisition: Faizan Qamar, Quang Ngoc Nguyen. All authors reviewed the results and approved the final version of the manuscript.
Availability of Data and Materials: Not applicable.
Ethics Approval: Not applicable.
Conflicts of Interest: The authors declare no conflicts of interest to report regarding the present study.
References
1. Qi Q, Chen X, Khalili A, Zhong C, Zhang Z, Ng DWK. Integrating sensing, computing, and communication in 6G wireless networks: design and optimization. IEEE Trans Commun. 2022;70(9):6212–27. doi:10.1109/TCOMM.2022.3190363.
2. Pokhrel SR, Ding J, Park J, Park O-S, Choi J. Towards enabling critical mMTC: a review of URLLC within mMTC. IEEE Access. 2020;8:131796–813. doi:10.1109/ACCESS.2020.3010271.
3. Guo S, Lu B, Wen M, Dang S, Saeed N. Customized 5G and beyond private networks with integrated URLLC, eMBB, mMTC, and positioning for industrial verticals. IEEE Commun Stand Magaz. 2022;6(1):52–7. doi:10.1109/MCOMSTD.0001.2100041.
4. Nasralla MM, Khattak SBA, Ur Rehman I, Iqbal M. Exploring the role of 6G technology in enhancing quality of experience for m-health multimedia applications: a comprehensive survey. Sensors. 2023;23(13):5882. doi:10.3390/s23135882.
5. Kumar R, Gupta SK, Wang H-C, Kumari CS, Korlam SSVP. From efficiency to sustainability: exploring the potential of 6G for a greener future. Sustainability. 2023;15(23):16387. doi:10.3390/su152316387.
6. Vakili A, Al-Khafaji HMR, Darbandi M, Heidari A, Jafari Navimipour N, Unal M. A new service composition method in the cloud-based internet of things environment using a grey wolf optimization algorithm and MapReduce framework. Concurr Comput. 2024;36(16):e8091. doi:10.1002/cpe.8091.
7. You KY. Survey on 5G and future 6G access networks for IoT applications. Int J Wirel Micro Technol. 2022;4(4):26–47. doi:10.5815/ijwmt.2022.04.03.
8. Salameh AI, El Tarhuni M. From 5G to 6G—challenges, technologies, and applications. Fut Inter. 2022;14(4):117. doi:10.3390/fi14040117.
9. Shafie A, Yang N, Han C, Jornet JM, Juntti M, Kürner T. Terahertz communications for 6G and beyond wireless networks: challenges, key advancements, and opportunities. IEEE Netw. 2022;37(3):162–9. doi:10.1109/MNET.118.2200057.
10. Bhat JR, Alqahtani SA. 6G ecosystem: current status and future perspective. IEEE Access. 2021;9:43134–67. doi:10.1109/ACCESS.2021.3054833.
11. Moubayed A, Shami A, Al-Dulaimi A. On end-to-end intelligent automation of 6G networks. Fut Inter. 2022;14(6):165. doi:10.3390/fi14060165.
12. Asad M, Basit A, Qaisar S, Ali M. Beyond 5G: hybrid end-to-end quality of service provisioning in heterogeneous IoT networks. IEEE Access. 2020;8:192320–38. doi:10.1109/ACCESS.2020.3032704.
13. Serôdio C, Cunha J, Candela G, Rodriguez S, Sousa XR, Branco F. The 6G ecosystem as support for IoE and private networks: vision, requirements, and challenges. Fut Inter. 2023;15(11):348. doi:10.3390/fi15110348.
14. Ahmad HF, Rafique W, Rasool RU, Alhumam A, Anwar Z, Qadir J. Leveraging 6G, extended reality, and IoT big data analytics for healthcare: a review. Comput Sci Rev. 2023;48(3):100558. doi:10.1016/j.cosrev.2023.100558.
15. Lee YL, Qin D, Wang L-C, Sim GH. 6G massive radio access networks: key applications, requirements and challenges. IEEE Open J Veh Technol. 2020;2:54–66. doi:10.1109/OJVT.2020.3044569.
16. Ebrahimi S, Bouali F, Haas OC. Resource management from single-domain 5G to end-to-end 6G network slicing: a survey. IEEE Commun Surv Tutor. 2024;26(4):2836–66. doi:10.1109/COMST.2024.3390613.
17. Alibraheemi AMH, Hindia MN, Dimyati K, Izam TFTMN, Yahaya J, Qamar F, et al. A survey of resource management in D2D communication for B5G networks. IEEE Access. 2023;11(3):7892–923. doi:10.1109/ACCESS.2023.3238799.
18. Hussein NH, Yaw CT, Koh SP, Tiong SK, Chong KH. A comprehensive survey on vehicular networking: communications, applications, challenges, and upcoming research directions. IEEE Access. 2022;10(6):86127–80. doi:10.1109/ACCESS.2022.3198656.
19. Ahad A, Tahir M, Aman Sheikh M, Ahmed KI, Mughees A, Numani A. Technologies trend towards 5G network for smart health-care using IoT: a review. Sensors. 2020;20(14):4047. doi:10.3390/s20144047.
20. Masroor R, Naeem M, Ejaz W. Resource management in UAV-assisted wireless networks: an optimization perspective. Ad Hoc Netw. 2021;121(3):102596. doi:10.1016/j.adhoc.2021.102596.
21. She C, Sun C, Gu Z, Li Y, Yang C, Poor HV, et al. A tutorial on ultrareliable and low-latency communications in 6G: integrating domain knowledge into deep learning. Proc IEEE. 2021;109(3):204–46. doi:10.1109/JPROC.2021.3053601.
22. Alhashimi HF, Hindia MN, Dimyati K, Hanafi EB, Tengku Mohmed Noor Izam TF. Reinforcement learning based power allocation for 6G heterogenous networks. In: International Conference on Next Generation Wired/Wireless Networking; 2023; Dubai, United Arab Emirates. p. 128–41.
23. Zhang J, Liu C, Li X, Zhen H-L, Yuan M, Li Y, et al. A survey for solving mixed integer programming via machine learning. Neurocomputing. 2023;519(1):205–17. doi:10.1016/j.neucom.2022.11.024.
24. Bariah L, Mohjazi L, Muhaidat S, Sofotasios PC, Kurt GK, Yanikomeroglu H, et al. A prospective look: key enabling technologies, applications and open research topics in 6G networks. IEEE Access. 2020;8:174792–820. doi:10.1109/ACCESS.2020.3019590.
25. Farhad A, Pyun J-Y. Terahertz meets AI: the state of the art. Sensors. 2023;23(11):5034. doi:10.3390/s23115034.
26. Aslam MM, Du L, Zhang X, Chen Y, Ahmed Z, Qureshi B. Sixth generation (6G) cognitive radio network (CRN) application, requirements, security issues, and key challenges. Wirel Commun Mob Comput. 2021;2021(1):1331428. doi:10.1155/2021/1331428.
27. Mekrache A, Bradai A, Moulay E, Dawaliby S. Deep reinforcement learning techniques for vehicular networks: recent advances and future trends towards 6G. Veh Commun. 2022;33(2):100398. doi:10.1016/j.vehcom.2021.100398.
28. Alhashimi HF, Hindia MN, Dimyati K, Hanafi EB, Izam TFTMN. Power allocation optimization based on multi-agents reinforcement learning for 6G cellular networks. In: 2024 Multimedia University Engineering Conference (MECON); 2024; Cyberjaya, Malaysia. p. 1–6.
29. Kazmi SHA, Qamar F, Hassan R, Nisar K. Routing-based interference mitigation in SDN enabled beyond 5G communication networks: a comprehensive survey. IEEE Access. 2023;11(4):4023–41. doi:10.1109/ACCESS.2023.3235366.
30. Dehkordi IF, Manochehri K, Aghazarian V. Internet of things (IoT) intrusion detection by machine learning (ML): a review. Asia-Pacific J Inform Technol Multim. 2023;12(1):13–38. doi:10.17576/apjitm-2023-1201-02.
31. Pathak V, Pandya RJ, Bhatia V, Lopez OA. Qualitative survey on artificial intelligence integrated blockchain approach for 6G and beyond. IEEE Access. 2023;11:105935–81. doi:10.1109/ACCESS.2023.3319083.
32. Noman HMF, Hanafi E, Noordin KA, Dimyati K, Hindia MN, Abdrabou A, et al. Machine learning empowered emerging wireless networks in 6G: recent advancements, challenges and future trends. IEEE Access. 2023;11:83017–51. doi:10.1109/ACCESS.2023.3302250.
33. Khan NA, Schmid S. AI-RAN in 6G networks: state-of-the-art and challenges. IEEE Open J Commun Soc. 2023;5:294–311. doi:10.1109/OJCOMS.2023.3343069.
34. Alhammadi A, Shayea I, El-Saleh AA, Azmi MH, Ismail ZH, Kouhalvandi L, et al. Artificial intelligence in 6G wireless networks: opportunities, applications, and challenges. Int J Intell Syst. 2024;2024(1):8845070. doi:10.1155/2024/8845070.
35. Alhashimi HF, Hindia MN, Dimyati K, Hanafi EB, Safie N, Qamar F, et al. A survey on resource management for 6G heterogeneous networks: current research, future trends, and challenges. Electronics. 2023;12(3):647. doi:10.3390/electronics12030647.
36. Hurtado Sánchez JA, Casilimas K, Caicedo Rendon OM. Deep reinforcement learning for resource management on network slicing: a survey. Sensors. 2022;22(8):3031. doi:10.3390/s22083031.
37. Zhang L, Qamar F, Liaqat M, Nour Hindia M, Akram Zainol Ariffin K. Toward efficient 6G IoT networks: a perspective on resource optimization strategies, challenges, and future directions. IEEE Access. 2024;12(4):76606–33. doi:10.1109/ACCESS.2024.3405487.
38. Lin M, Zhao Y. Artificial intelligence-empowered resource management for future wireless communications: a survey. China Commun. 2020;17(3):58–77. doi:10.23919/JCC.2020.03.006.
39. Du J, Jiang C, Wang J, Ren Y, Debbah M. Machine learning for 6G wireless networks: carrying forward enhanced bandwidth, massive access, and ultrareliable/low-latency service. IEEE Vehicular Technol Mag. 2020;15(4):122–34. doi:10.1109/MVT.2020.3019650.
40. Shi Y, Lian L, Shi Y, Wang Z, Zhou Y, Fu L, et al. Machine learning for large-scale optimization in 6G wireless networks. IEEE Commun Surv Tutor. 2023;25(4):2088–132. doi:10.1109/COMST.2023.3300664.
41. Mahmood MR, Matin MA, Sarigiannidis P, Goudos SK. A comprehensive review on artificial intelligence/machine learning algorithms for empowering the future IoT toward 6G era. IEEE Access. 2022;10:87535–62. doi:10.1109/ACCESS.2022.3199689.
42. Puspitasari AA, An TT, Alsharif MH, Lee BM. Emerging technologies for 6G communication networks: machine learning approaches. Sensors. 2023;23(18):7709. doi:10.3390/s23187709.
43. Banafaa M, Shayea I, Din J, Azmi MH, Alashbi A, Daradkeh YI, et al. 6G mobile communication technology: requirements, targets, applications, challenges, advantages, and opportunities. Alex Eng J. 2023;64:245–74. doi:10.1016/j.aej.2022.08.017.
44. Shen L-H, Feng K-T, Hanzo L. Five facets of 6G: research challenges and opportunities. ACM Comput Surv. 2023;55(11):1–39. doi:10.1145/3571072.
45. Jawad AT, Maaloul R, Chaari L. A comprehensive survey on 6G and beyond: enabling technologies, opportunities of machine learning and challenges. Comput Netw. 2023;237(3):110085. doi:10.1016/j.comnet.2023.110085.
46. Ibrahim SK, Singh MJ, Al-Bawri SS, Ibrahim HH, Islam MT, Islam MS, et al. Design, challenges and developments for 5G massive MIMO antenna systems at sub 6-GHz band: a review. Nanomaterials. 2023;13(3):520. doi:10.3390/nano13030520.
47. Muntaha ST, Lazaridis PI, Hafeez M, Ahmed QZ, Khan FA, Zaharis ZD. Blockchain for dynamic spectrum access and network slicing: a review. IEEE Access. 2023;11:17922–44. doi:10.1109/ACCESS.2023.3243985.
48. Alzubaidi OTH, Hindia MN, Dimyati K, Noordin KA, Wahab ANA, Qamar F, et al. Interference challenges and management in B5G network design: a comprehensive review. Electronics. 2022;11(18):2842. doi:10.3390/electronics11182842.
49. Trevlakis SE, Boulogeorgos AA, Pliatsios D, Querol J, Ntontin K, Sarigiannidis P, et al. Localization as a key enabler of 6G wireless systems: a comprehensive survey and an outlook. IEEE Open J Commun Soc. 2023;4:2733–801. doi:10.1109/OJCOMS.2023.3324952.
50. Alibraheemi AMH, Mohmed Noor Izam T, Hindia MN, Dimyati K. Mode selection and Q-learning based resource allocation for D2D communication networks. In: 2024 Multimedia University Engineering Conference (MECON); 2024 Jul 23–25; Cyberjaya, Malaysia. 2024. p. 1–6. doi:10.1109/MECON62796.2024.10776265.
51. Hasan Alibraheemi AM, Tengku Mohmed Noor Izam TF, Hindia MN, Dimyati K. Multi agent Q-learning based resource allocation for relay-aided D2D enabled HetNets. In: 2024 Multimedia University Engineering Conference (MECON); 2024 Jul 23–25; Cyberjaya, Malaysia. 2024. p. 1–6. doi:10.1109/MECON62796.2024.10776209.
52. Alzubaidi OTH, Nour Hindia M, Dimyati K, Noordin KA, Qamar F. Interference mitigation based on joint optimization of NTBS 3D positions and RIS reflection in downlink NOMA HetNets. IEEE Access. 2024;12(1):98750–67. doi:10.1109/ACCESS.2024.3410954.
53. Chataut R, Nankya M, Akl R. 6G networks and the AI revolution—exploring technologies, applications, and emerging challenges. Sensors. 2024;24(6):1888. doi:10.3390/s24061888.
54. Yazici İ, Shayea I, Din J. A survey of applications of artificial intelligence and machine learning in future mobile networks-enabled systems. Eng Sci Technol Int J. 2023;44:101455. doi:10.1016/j.jestch.2023.101455.
55. Abubakar AI, Ahmad I, Omeke KG, Ozturk M, Ozturk C, Abdel-Salam AM, et al. A survey on energy optimization techniques in UAV-based cellular networks: from conventional to machine learning approaches. Drones. 2023;7(3):214. doi:10.3390/drones7030214.
56. Tam P, Corrado R, Eang C, Kim S. Applicability of deep reinforcement learning for efficient federated learning in massive IoT communications. Appl Sci. 2023;13(5):3083. doi:10.3390/app13053083.
57. Bai H, Cheng R, Jin Y. Evolutionary reinforcement learning: a survey. Intell Comput. 2023;2(8):25. doi:10.34133/icomputing.0025.
58. Sharma H, Kumar N. Deep learning based physical layer security for terrestrial communications in 5G and beyond networks: a survey. Phys Commun. 2023;57:102002. doi:10.1016/j.phycom.2023.102002.
59. Taye MM. Understanding of machine learning with deep learning: architectures, workflow, applications and future directions. Computers. 2023;12(5):91. doi:10.3390/computers12050091.
60. Amiri Z, Heidari A, Navimipour NJ, Esmaeilpour M, Yazdani Y. The deep learning applications in IoT-based bio-and medical informatics: a systematic literature review. Neural Comput Appl. 2024;36(11):5757–97. doi:10.1007/s00521-023-09366-3.
61. Zhang Y, Xiong L, Yu J. Deep learning based user association in heterogeneous wireless networks. IEEE Access. 2020;8:197439–47. doi:10.1109/ACCESS.2020.3033133.
62. Zhang H, Zhang H, Long K, Karagiannidis GK. Deep learning based radio resource management in NOMA networks: user association, subchannel and power allocation. IEEE Trans Netw Sci Eng. 2020;7(4):2406–15. doi:10.1109/TNSE.2020.3004333.
63. Yu P, Zhou F, Zhang X, Qiu X, Kadoch M, Cheriet M. Deep learning-based resource allocation for 5G broadband TV service. IEEE Trans Broadcast. 2020;66(4):800–13. doi:10.1109/TBC.2020.2968730.
64. Van Chien T, Canh TN, Björnson E, Larsson EG. Power control in cellular massive MIMO with varying user activity: a deep learning solution. IEEE Trans Wirel Commun. 2020;19(9):5732–48. doi:10.1109/TWC.2020.2996368.
65. Huang ZH, Hsu YL, Chang PK, Tsai MJ. Efficient handover algorithm in 5G networks using deep learning. In: GLOBECOM 2020-2020 IEEE Global Communications Conference; 2020 Dec 7–11; Taipei, China. 2020. p. 1–6. doi:10.1109/globecom42002.2020.9322618.
66. Fathy M, Abood MS, Hamdi MM. Optimization of energy-efficient cloud radio access networks for 5G using neural networks. In: 2021 International Conference on Intelligent Technology, System and Service for Internet of Everything (ITSS-IoE); 2021 Nov 1–2; Sana’a, Yemen. 2021. p. 1–6. doi:10.1109/itss-ioe53029.2021.9615290.
67. Alablani IA, Arafah MA. Enhancing 5G small cell selection: a neural network and IoV-based approach. Sensors. 2021;21(19):6361. doi:10.3390/s21196361.
68. Kim D, Kwon S, Jung H, Lee IH. Deep learning-based resource allocation scheme for heterogeneous NOMA networks. IEEE Access. 2023;11:89423–32. doi:10.1109/ACCESS.2023.3307407.
69. Jang J, Yang HJ. Recurrent neural network-based user association and power control in dynamic HetNets. IEEE Trans Vehicular Technol. 2022;71(9):9674–89. doi:10.1109/TVT.2022.3181207.
70. Zhang Y, Yang J, Liu Q, Liu Y, Zhang T. Unsupervised learning-based coordinated hybrid precoding for MmWave massive MIMO-enabled HetNets. IEEE Trans Wirel Commun. 2024;23(7):7200–13. doi:10.1109/TWC.2023.3338481.
71. Purushothaman KE, Nagarajan V. Evolutionary multi-objective optimization algorithm for resource allocation using deep neural network in 5G multi-user massive MIMO. Int J Electron. 2020;108(7):1214–33. doi:10.1080/00207217.2020.1843715.
72. Labana M, Hamouda W. Unsupervised deep learning approach for near optimal power allocation in CRAN. IEEE Trans Vehicular Technol. 2021;70(7):7059–70. doi:10.1109/TVT.2021.3082776.
73. Zhao H, Xu B, Huang H, Wang Q, Zhu C, Gui G. Energy efficient power allocation for ultra-reliable and low-latency communications via unsupervised learning. IET Commun. 2023;17(9):1048–58. doi:10.1049/cmu2.12605.
74. Sirohi D, Kumar N, Rana PS, Tanwar S, Iqbal R, Hijjii M. Federated learning for 6G-enabled secure communication systems: a comprehensive survey. Artif Intell Rev. 2023;56(10):11297–389. doi:10.1007/s10462-023-10417-3.
75. Duan Q, Huang J, Hu S, Deng R, Lu Z, Yu S. Combining federated learning and edge computing toward ubiquitous intelligence in 6G network: challenges, recent advances, and future directions. IEEE Commun Surv Tutor. 2023;25(4):2892–950. doi:10.1109/COMST.2023.3316615.
76. Baganal-Krishna N, Lübben R, Liotou E, Katsaros KV, Rizk A. A federated learning approach to QoS forecasting in cellular vehicular communications: approaches and empirical evidence. Comput Netw. 2024;242:110239. doi:10.1016/j.comnet.2024.110239.
77. Le THT, Tran NH, Tun YK, Nguyen MN, Pandey SR, Han Z, et al. An incentive mechanism for federated learning in wireless cellular networks: an auction approach. IEEE Trans Wirel Commun. 2021;20(8):4874–87. doi:10.1109/TWC.2021.3062708.
78. Salehi M, Hossain E. Federated learning in unreliable and resource-constrained cellular wireless networks. IEEE Trans Commun. 2021;69(8):5136–51. doi:10.1109/TCOMM.2021.3081746.
79. Song Y, Ji G, Dai M, Wu Y, Qian L, Lin B. Joint resource allocation and scheduling for wireless power transfer aided federated learning. In: 2022 31st Wireless and Optical Communications Conference (WOCC); 2022 Aug 11–12; Shenzhen, China. 2022. p. 155–60. doi:10.1109/WOCC55104.2022.9880578.
80. He Y, Yang M, He Z, Guizani M. Resource allocation based on digital twin-enabled federated learning framework in heterogeneous cellular network. IEEE Trans Vehicular Technol. 2022;72(1):1149–58. doi:10.1109/TVT.2022.3205778.
81. Ji Z, Qin Z. Federated learning for distributed energy-efficient resource allocation. In: ICC 2022-IEEE International Conference on Communications; 2022 May 16–20; Seoul, Republic of Korea. 2022. p. 1–6. doi:10.1109/ICC45855.2022.9882281.
82. Nguyen SC, Hoang M, Vo TP, Dang DNM. Efficient resource allocation using federated learning in cellular networks. In: Proceedings of the 3rd ACM Workshop on Intelligent Cross-Data Analysis and Retrieval; 2022; Newark, NJ, USA. p. 70–3. doi:10.1145/3512731.3534214.
83. Bartsiokas IA, Gkonis PK, Kaklamani DI, Venieris IS. A federated learning-based resource allocation scheme for relaying-assisted communications in multicellular next generation network topologies. Electronics. 2024;13(2):390. doi:10.3390/electronics13020390.
84. Li X, Lu L, Ni W, Jamalipour A, Zhang D, Du H. Federated multi-agent deep reinforcement learning for resource allocation of vehicle-to-vehicle communications. IEEE Trans Vehicular Technol. 2022;71(8):8810–24. doi:10.1109/TVT.2022.3173057.
85. Guo Q, Tang F, Kato N. Federated reinforcement learning-based resource allocation in D2D-enabled 6G. IEEE Netw. 2022;37(5):89–95. doi:10.1109/MNET.122.2200102.
86. Ji Z, Qin Z, Tao X. Meta federated reinforcement learning for distributed resource allocation. IEEE Trans Wirel Commun. 2024;23(7):7865–76. doi:10.1109/TWC.2023.3345363.
87. Nguyen TT, Nguyen ND, Nahavandi S. Deep reinforcement learning for multiagent systems: a review of challenges, solutions, and applications. IEEE Trans Cybern. 2020;50(9):3826–39. doi:10.1109/TCYB.2020.2977374.
88. Dinata NFP, Ramli MAM, Jambak MI, Sidik MAB, Alqahtani MM. Designing an optimal microgrid control system using deep reinforcement learning: a systematic review. Eng Sci Technol Int J. 2024;51:101651. doi:10.1016/j.jestch.2024.101651.
89. Yu S, Lee JW. Deep reinforcement learning based resource allocation for D2D communications underlay cellular networks. Sensors. 2022;22(23):9459. doi:10.3390/s22239459.
90. Zhang C, Lv T, Huang P, Lin Z, Zeng J, Ren Y. Joint optimization of bandwidth and power allocation in uplink systems with deep reinforcement learning. Sensors. 2023;23(15):6822. doi:10.3390/s23156822.
91. Zhang S, Bao S, Chi K, Yu K, Mumtaz S. DRL-based computation rate maximization for wireless powered multi-AP edge computing. IEEE Trans Commun. 2024;72(2):1105–18. doi:10.1109/TCOMM.2023.3325905.
92. Jeong YJ, Yu S, Lee JW. DRL-based resource allocation for NOMA-enabled D2D communications underlay cellular networks. IEEE Access. 2023;11:140270–86. doi:10.1109/ACCESS.2023.3341585.
93. Xie J, Li L, Li C. A joint resource optimization allocation algorithm for NOMA-D2D communication. IET Commun. 2024;18(6):398–408. doi:10.1049/cmu2.12741.
94. Kaur A, Kumar K, Prakash A, Tripathi R. Imperfect CSI-based resource management in cognitive IoT networks: a deep recurrent reinforcement learning framework. IEEE Trans Cogn Commun Netw. 2023;9(5):1271–81.
95. Naderializadeh N, Sydir JJ, Simsek M, Nikopour H. Resource management in wireless networks via multi-agent deep reinforcement learning. IEEE Trans Wirel Commun. 2021;20(6):3507–23. doi:10.1109/TWC.2021.3051163.
96. Lee S-H, Shi X-P, Tan T-H, Lee Y-L, Huang Y-F. Performance of Q-learning based resource allocation for D2D communications in heterogeneous networks. ICT Express. 2023;9(6):1032–9. doi:10.1016/j.icte.2023.02.003.
97. Jiang F, Zhang L, Sun C, Yuan Z. Clustering and resource allocation strategy for D2D multicast networks with machine learning approaches. China Commun. 2021;18(1):196–211. doi:10.23919/JCC.2021.01.017.
98. Zhang S, Wang X, Shi Z, Liu J. Reinforcement learning based RSS-threshold optimization for D2D-aided HTC/MTC in dense NOMA systems. IEEE Trans Wirel Commun. 2023;22(10):6489–503. doi:10.1109/TWC.2023.3244192.
99. Omidkar A, Khalili A, Nguyen HH, Shafiei H. Reinforcement-learning-based resource allocation for energy-harvesting-aided D2D communications in IoT networks. IEEE Internet Things J. 2022;9(17):16521–31. doi:10.1109/JIOT.2022.3151001.
100. Wang X, Jin T, Hu L, Qian Z. Energy-efficient power allocation and Q-learning-based relay selection for relay-aided D2D communication. IEEE Trans Vehicular Technol. 2020;69(6):6452–62. doi:10.1109/TVT.2020.2985873.
101. Abasi AK, Aloqaily M, Guizani M, Ouni B. Metaheuristic algorithms for 6G wireless communications: recent advances and applications. Ad Hoc Netw. 2024;158(4):103474. doi:10.1016/j.adhoc.2024.103474.
102. Nawaz SJ, Sharma SK, Wyne S, Patwary MN, Asaduzzaman M. Quantum machine learning for 6G communication networks: state-of-the-art and vision for the future. IEEE Access. 2019;7:46317–50. doi:10.1109/ACCESS.2019.2909490.
103. Gachhadar A, Maharjan RK, Shrestha S, Adhikari NB, Qamar F, Kazmi SHA, et al. Power optimization in multi-tier heterogeneous networks using genetic algorithm. Electronics. 2023;12(8):1795. doi:10.3390/electronics12081795.
104. Fayad A, Cinkler T. Energy-efficient joint user and power allocation in 5G millimeter wave networks: a genetic algorithm-based approach. IEEE Access. 2024;12:20019–30. doi:10.1109/ACCESS.2024.3361660.
105. Madapatha C, Makki B, Muhammad A, Dahlman E, Alouini M-S, Svensson T. On topology optimization and routing in integrated access and backhaul networks: a genetic algorithm-based approach. IEEE Open J Commun Soc. 2021;2:2273–91. doi:10.1109/OJCOMS.2021.3114669.
106. Goutham N, Mishra PK. An efficient QGA-based model for resource allocation in D2D communication for 5G-HCRAN networks. IETE J Res. 2024;70(4):3347–57. doi:10.1080/03772063.2023.2197404.
107. Hasan MK, Islam S, Gadekallu TR, Ismail AF, Amanlou S, Abdullah SNHS. Novel EBBDSA based resource allocation technique for interference mitigation in 5G heterogeneous network. Comput Commun. 2023;209:320–30. doi:10.1016/j.comcom.2023.07.012.
108. Osama M, El Ramly S, Abdelhamid B. Binary PSO with classification trees algorithm for enhancing power efficiency in 5G networks. Sensors. 2022;22(21):8570. doi:10.3390/s22218570.
109. Benni NS, Manvi SS. Modified PSO based channel allocation scheme for interference management in 5G wireless mesh networks. J Telecommun Inform Technol. 2022;8(2):1–13. doi:10.26636/jtit.2022.156621.
110. Purushothaman KE, Nagarajan V. Multiobjective optimization based on self-organizing Particle Swarm Optimization algorithm for massive MIMO 5G wireless network. Int J Commun Syst. 2021;34(4):e4725. doi:10.1002/dac.4725.
111. Nisha Rani S, Indumathi G. Chicken swarm optimization based optimal channel allocation in massive MIMO. Wirel Pers Commun. 2023;129(3):2055–77. doi:10.1007/s11277-023-10225-6.
112. Khan BS, Jangsher S, Hussain N, Arafah MA. Artificial neural network-based joint mobile relay selection and resource allocation for cooperative communication in heterogeneous network. IEEE Syst J. 2022;16(4):5809–20. doi:10.1109/JSYST.2022.3179351.
113. Xu J, Guo C, Zhang H. Joint channel allocation and power control based on PSO for cellular networks with D2D communications. Comput Netw. 2018;133(C):104–19. doi:10.1016/j.comnet.2018.01.017.
114. Coronado E, Behravesh R, Subramanya T, Fernàndez-Fernàndez A, Siddiqui MS, Costa-Pérez X, et al. Zero touch management: a survey of network automation solutions for 5G and 6G networks. IEEE Commun Surv Tutor. 2022;24(4):2535–78. doi:10.1109/COMST.2022.3212586.
115. Wang C-X, You X, Gao X, Zhu X, Li Z, Zhang C, et al. On the road to 6G: visions, requirements, key technologies, and testbeds. IEEE Commun Surv Tutor. 2023;25(2):905–74. doi:10.1109/COMST.2023.3249835.
116. Mishra P, Singh G. Energy management systems in sustainable smart cities based on the internet of energy: a technical review. Energies. 2023;16(19):6903. doi:10.3390/en16196903.
117. Salahdine F, Han T, Zhang N. 5G, 6G, and beyond: recent advances and future challenges. Annals Telecommun. 2023;78(9):525–49. doi:10.1007/s12243-022-00938-3.
118. Wenhua Z, Qamar F, Abdali T-AN, Hassan R, Jafri STA, Nguyen QN. Blockchain technology: security issues, healthcare applications, challenges and future trends. Electronics. 2023;12(3):546. doi:10.3390/electronics12030546.
119. Yap KY, Chin HH, Klemeš JJ. Blockchain technology for distributed generation: a review of current development, challenges and future prospect. Renew Sustain Energ Rev. 2023;175(2):113170. doi:10.1016/j.rser.2023.113170.
120. Abd Elaziz M, Al-qaness MA, Dahou A, Alsamhi SH, Abualigah L, Ibrahim RA, et al. Evolution toward intelligent communications: impact of deep learning applications on the future of 6G technology. Wiley Interdiscip Rev Data Min Knowl Discov. 2024;14(1):e1521. doi:10.1002/widm.1521.
121. da Costa DB, Zhao Q, Chafii M, Bader F, Debbah M. 6G: vision, applications, and challenges. In: Fundamentals of 6G communications and networking. Cham, Switzerland: Springer; 2023. p. 15–69.
122. Cömert K, Akkaş M. Internet of senses-potential applications and implications. J Soft Comput Artif Intell. 2023;4(2):48–54. doi:10.55195/jscai.1316512.
123. Prateek K, Ojha NK, Altaf F, Maity S. Quantum secured 6G technology-based applications in Internet of Everything. Telecommun Syst. 2023;82(2):315–44. doi:10.1007/s11235-022-00979-y.
124. Dao N-N. Internet of wearable things: advancements and benefits from 6G technologies. Future Gener Comput Syst. 2023;138(7):172–84. doi:10.1016/j.future.2022.07.006.
125. Ali Kazmi SH, Qamar F, Hassan R, Nisar K, Dahnil DPB, Al-Betar MA. Threat intelligence with non-IID data in federated learning enabled intrusion detection for SDN: an experimental study. In: 2023 24th International Arab Conference on Information Technology (ACIT); 2023 Dec 6–8; Ajman, United Arab Emirates. 2023. p. 1–6. doi:10.1109/ACIT58888.2023.10453867.
126. Su Y-P, Chen X-Q, Zhou C, Pearson LH, Pretty CG, Chase JG. Integrating virtual, mixed, and augmented reality into remote robotic applications: a brief review of extended reality-enhanced robotic systems for intuitive telemanipulation and telemanufacturing tasks in hazardous conditions. Appl Sci. 2023;13(22):12129. doi:10.3390/app132212129.
127. Kamath S, Anand S, Buchke S, Agnihotri K. A review of recent developments in 6G communications systems. Eng Proc. 2024;59(1):167. doi:10.3390/engproc2023059167.
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

