Open Access
REVIEW
A Comprehensive Survey on AI-Assisted Multiple Access Enablers for 6G and beyond Wireless Networks
1 Office of Research Innovation and Commercialization, University of Management and Technology, Lahore, 54770, Pakistan
2 Department of Electrical and Electronics Engineering, Faculty of Engineering, University of Lagos, Akoka, Lagos, 100213, Nigeria
3 Electrical and Electronic Engineering Department, School of Science and Technology, Pan-Atlantic University, Ibeju-Lekki, Lagos, 105101, Nigeria
4 Department of Library and Information Science, Fu Jen Catholic University, New Taipei City, 242062, Taiwan
5 Department of Computer Science and Information Engineering, Asia University, Taichung City, 413305, Taiwan
* Corresponding Authors: Agbotiname Lucky Imoize. Email: ; Cheng-Chi Lee. Email:
(This article belongs to the Special Issue: Artificial Intelligence for 6G Wireless Networks)
Computer Modeling in Engineering & Sciences 2025, 145(2), 1575-1664. https://doi.org/10.32604/cmes.2025.073200
Received 12 September 2025; Accepted 24 October 2025; Issue published 26 November 2025
Abstract
The envisioned 6G wireless networks demand advanced Multiple Access (MA) schemes capable of supporting ultra-low latency, massive connectivity, high spectral efficiency, and energy efficiency (EE), especially as the current 5G networks have not achieved the promised 5G goals, including the projected 2000 times EE improvement over the legacy 4G Long Term Evolution (LTE) networks. This paper provides a comprehensive survey of Artificial Intelligence (AI)-enabled MA techniques, emphasizing their roles in Spectrum Sensing (SS), Dynamic Resource Allocation (DRA), user scheduling, interference mitigation, and protocol adaptation. In particular, we systematically analyze the progression of traditional and modern MA schemes, from Orthogonal Multiple Access (OMA)-based approaches like Time Division Multiple Access (TDMA) and Frequency Division Multiple Access (FDMA) to advanced Non-Orthogonal Multiple Access (NOMA) methods, including power domain-NOMA, Sparse Code Multiple Access (SCMA), and Rate Splitting Multiple Access (RSMA). The study further categorizes AI techniques—such as Machine Learning (ML), Deep Learning (DL), Reinforcement Learning (RL), Federated Learning (FL), and Explainable AI (XAI)—and maps them to practical challenges in Dynamic Spectrum Management (DSM), protocol optimization, and real-time distributed decision-making. Optimization strategies, including metaheuristics and multi-agent learning frameworks, are reviewed to illustrate the potential of AI in enhancing energy efficiency, system responsiveness, and cross-layer resource allocation. Additionally, the review addresses security, privacy, and trust concerns, highlighting solutions like privacy-preserving ML, FL, and XAI in 6G and beyond. By identifying research gaps, challenges, and future directions, this work offers a structured resource for researchers and practitioners aiming to integrate AI into 6G MA systems for intelligent, scalable, and secure wireless communications.
1 Introduction
The sixth generation (6G) wireless communication systems represent a transformative leap beyond 5G, aiming to deliver ultra-high data rates (up to 1 Terabit-per-second (Tbps)), massive device connectivity (up to 10⁷ devices/km²), ultra-low latency (10–100 µs), and support for high mobility (up to 1000 km/h) [1]. These capabilities are designed to support advanced applications, including immersive Extended Reality (XR) and tactile internet, autonomous systems, and even space tourism [2]. A defining feature of 6G is its native integration with Artificial Intelligence (AI), enabling intelligent, autonomous, and adaptive networking. Technologies such as Machine Learning (ML), Deep Learning (DL), and Natural Language Processing (NLP) will enhance real-time decision-making, predictive maintenance, and DRA [3]. To support this, 6G will utilize a broad spectrum range—from sub-6 GHz and mmWave (28/39/60 GHz) to THz bands (above 100 GHz)—as well as non-Radio Frequency (RF) domains, such as Visible Light Communication (VLC) and quantum channels. However, efficient spectrum utilization remains a significant challenge [4]. Cloud-native and edge computing architectures will play a central role in 6G by reducing hardware dependency and enabling distributed real-time processing and network function virtualization. AI-driven functions, such as intelligent network slicing, will facilitate support for diverse and heterogeneous applications, including smart cities, Industry 4.0, and Internet of Things (IoT) ecosystems [5].
With the growing complexity and scale of 6G, security and privacy have become critical. AI-based Intrusion Detection System (IDS), Anomaly Detection (AD), and blockchain-enabled, decentralized solutions are being developed to address emerging threats posed by expanded device ecosystems and dynamic architectures [6]. Still, these solutions present challenges, including ethical data handling, computational costs, and uncertainty in AI. Federated Learning (FL) and quantum AI are gaining traction to address scalability, privacy, and adaptability issues in distributed AI systems [7]. The convergence of AI and 6G is expected to redefine connectivity and enable novel digital-physical experiences that align with the goals of 2030–2040 [8].
The telecommunications industry is undergoing rapid decentralization, driven by the need for intelligent, high-speed networks capable of supporting massive machine-type communication (mMTC). Technologies such as Software-Defined Networking (SDN), virtualization, and heterogeneous architectures are being adopted, although they introduce new challenges in integration and standardization [9]. 6G is seen as the next significant advancement, leveraging Edge AI to manage intelligent systems and enable next-generation applications such as Autonomous Vehicles (AVs), smart environments, and the Internet of Everything [10].
A significant hurdle for 6G is spectrum management. Traditional static allocation methods, regulated by agencies such as the FCC (U.S.) and the National Radio Administration (China), have led to severe underutilization. In some high-demand bands (hundreds of MHz to 3 GHz), utilization is reportedly as low as 5% across time and location [11]. To address this inefficiency, Dynamic Spectrum Access (DSA) and intelligent sharing models are essential. One promising approach is Cognitive Radio (CR), introduced by Joseph Mitola in 1999, which enables devices to sense, analyze, and adapt to their spectral environment [12]. The CR process comprises four stages: SS, analysis, decision-making, and reconfiguration, with SS being the most critical for identifying unused frequencies in real-time. CR is widely used in communications, Unmanned Aerial Vehicles (UAVs), radar, and transportation systems for tasks such as anti-jamming and obstacle avoidance. AI-enhanced SS has emerged as a key development. ML, DL, game theory, and optimization algorithms have been used to enhance sensing performance in dynamic environments [13].
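The sensing stage of the CR cycle described above is classically realized by an energy detector. A minimal, illustrative NumPy sketch follows; the threshold factor, noise power, and signal parameters are assumed values for demonstration, not taken from the cited works:

```python
import numpy as np

def energy_detect(samples, noise_power, threshold_factor=2.0):
    """Classic energy detector: declare the band occupied when the
    average sample energy exceeds a factor of the noise floor."""
    test_statistic = np.mean(np.abs(samples) ** 2)
    return bool(test_statistic > threshold_factor * noise_power)

rng = np.random.default_rng(0)
noise_power = 1.0
# Band idle: complex Gaussian noise only.
noise = (rng.normal(size=1000) + 1j * rng.normal(size=1000)) / np.sqrt(2)
# Band busy: a primary-user tone buried in the same noise.
signal = 2.0 * np.exp(1j * 2 * np.pi * 0.1 * np.arange(1000)) + noise

print(energy_detect(noise, noise_power))   # idle band
print(energy_detect(signal, noise_power))  # occupied band
```

AI-based detectors surveyed later replace this fixed threshold test with learned decision functions that remain reliable at low SNR.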
Deep learning techniques, particularly Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, are highly effective in modeling nonlinear spectral characteristics and enhancing detection accuracy and speed. However, most existing DL-based SS systems are non-cooperative, leaving them vulnerable to limited training data and reduced robustness. Cooperative SS, in which multiple devices collaborate, has shown promise in improving accuracy and real-time adaptability. For example, Ref. [14] compared CNN architectures, such as LeNet, AlexNet, and VGG-16, on 2D signal data under various fusion rules. The research [15] implemented federated learning in cooperative SS, achieving a 98.78% detection rate at a −15 dB Signal-to-Noise Ratio (SNR) with only 1% false positives.
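The hard-decision fusion rules evaluated in cooperative SS studies such as [14] can be sketched minimally as follows; the sensor count and local verdicts are illustrative:

```python
import numpy as np

def fuse(local_decisions, rule="majority"):
    """Combine hard (0/1) local sensing decisions at a fusion centre.
    OR: occupied if any sensor says so; AND: only if all agree;
    majority: occupied if more than half agree."""
    votes = np.asarray(local_decisions)
    if rule == "or":
        return int(votes.any())
    if rule == "and":
        return int(votes.all())
    if rule == "majority":
        return int(votes.sum() > len(votes) / 2)
    raise ValueError(f"unknown rule: {rule}")

decisions = [1, 0, 1, 1, 0]  # five sensors' local verdicts
print(fuse(decisions, "or"), fuse(decisions, "and"), fuse(decisions, "majority"))
```

The OR rule maximizes detection probability at the cost of false alarms, the AND rule does the opposite, and majority voting trades between the two — which is why learned fusion is an attractive replacement.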
Simultaneously, Multiple Access (MA) techniques face growing limitations. Conventional schemes—such as Orthogonal Multiple Access (OMA), Space Division Multiple Access (SDMA), Nonorthogonal Multiple Access (NOMA), and Distributed Coordination Function (DCF)-based access—struggle to meet 6G’s extreme requirements for latency, reliability, connectivity, and Energy Efficiency (EE). OMA suffers from low spectral efficiency; SDMA is impractical in dense urban or indoor environments due to high antenna complexity; NOMA increases system overhead; and random-access schemes like DCF are prone to collisions [16]. As billions of smart and bandwidth-intensive devices connect to 6G networks, emerging use cases such as smart cities, XR, AVs, and Industry 4.0 are placing immense demands on spectrum, power, and latency. To meet these, AI-driven MA techniques are being developed. AI-empowered next-generation MA solutions enable real-time learning and decision-making, addressing limitations of conventional MA while improving Quality of Service (QoS) for latency-sensitive and bandwidth-intensive applications.
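As a minimal illustration of the power-domain NOMA principle mentioned above, the sketch below computes two-user downlink rates under successive interference cancellation (SIC); the power split and channel gains are illustrative values:

```python
import numpy as np

def noma_rates(p_near, p_far, g_near, g_far, noise=1.0):
    """Two-user power-domain NOMA downlink rates (bps/Hz).
    The far (weak) user treats the near user's signal as interference;
    the near (strong) user removes the far user's signal via SIC."""
    r_far = np.log2(1 + p_far * g_far / (p_near * g_far + noise))
    r_near = np.log2(1 + p_near * g_near / noise)
    return r_near, r_far

# Total power 10 units, 80% allocated to the far user with the weaker gain.
r_near, r_far = noma_rates(p_near=2.0, p_far=8.0, g_near=4.0, g_far=0.5)
print(round(r_near, 3), round(r_far, 3))
```

The asymmetric power split is what lets both users share the same time-frequency resource, and choosing that split adaptively is exactly the kind of configuration problem AI-driven MA targets.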
This review aims to gain a deeper understanding of the approach to systematically incorporating the concept of AI in formulating and optimizing MA schemes for 6G and beyond wireless networks. This review outlines the motivations for adopting AI-based solutions, categorizes ongoing research trends, and assesses the effectiveness of specific approaches across various performance indicators to achieve the envisioned goals of 6G and future wireless networks. To be more precise, our interest lies in how AI will enable SS, DRA, user scheduling and management, interference mitigation, and protocol adaptation. The key contributions of the paper are outlined as follows:
• This paper provides a thorough and structured survey of AI-enabled MA techniques aimed at advancing 6G wireless networks. Specifically, the paper emphasizes critical elements of this evolution, including SS, intelligent protocol design, and optimization frameworks, which are essential for meeting the stringent requirements of 6G, such as ultra-low latency, massive device connectivity, and high spectral efficiency. The review begins by examining both fundamental and modern MA schemes, starting from OMA, including Time Division Multiple Access (TDMA) and Frequency Division Multiple Access (FDMA), to more sophisticated NOMA techniques such as power domain-NOMA, Sparse Code Multiple Access (SCMA), and Rate Splitting Multiple Access (RSMA). By analyzing their progression, strengths, and limitations, the paper offers valuable insights into how MA schemes must evolve to support next-generation wireless services and systems.
• The paper covers a broad description of different types of learning in AI categorization, including Supervised Learning (SL), Unsupervised Learning (USL), and Reinforcement Learning (RL), which can be applied in MA for enabling future 6G wireless networks. Moreover, the newly established studies focusing on AI-based SS and MA protocol design have been thoroughly examined, highlighting critical deficiencies and research gaps in existing work.
• One of its significant contributions is the application of a broad range of AI methods (such as ML, DL, RL, FL, and Explainable AI (XAI)) to a set of practical problems in DSM and smart-protocol design. The paper focuses on core optimization approaches using metaheuristic and multi-objective frameworks, providing insights into how AI can enhance real-time, adaptive, and distributed decision-making in future wireless systems. Specifically, one of the key contributions of this work lies in its detailed mapping of AI techniques—including ML, DL, RL, FL, and XAI—to practical challenges in DSM and adaptive protocol development. This structured classification is particularly novel for its connection of specific AI models to roles in prediction, allocation, control, and optimization within complex, evolving 6G environments. Additionally, the paper offers a layered understanding of how these techniques can be integrated into spectrum-aware access mechanisms to make them more flexible, intelligent, and scalable.
• The survey further delves into AI-based optimization strategies, highlighting metaheuristic methods like Genetic Algorithms (GA), Particle Swarm Optimization (PSO), and Ant Colony Optimization, alongside Deep Reinforcement Learning (DRL) and Multi-Agent Reinforcement Learning (MARL). These approaches are explored in the context of real-time, distributed decision-making, showcasing how AI can enhance cross-layer Resource Allocation (RA), energy efficiency, and system responsiveness. By presenting this comprehensive view, the paper provides a foundation for designing adaptive, low-latency, and resource-optimized 6G MA frameworks. Additionally, we mapped the enabling technologies with the most appropriate computational models and application scenarios in 6G networks.
• Another critical and often overlooked aspect addressed in this work is the role of AI in enhancing security, privacy, and trust in access control. This survey identifies potential vulnerabilities, including adversarial attacks and opaque AI decision-making. It presents possible solutions like privacy-preserving ML, FL, and XAI to ensure trustworthy and transparent systems. The paper also outlines open research challenges, including the lack of empirical validation, data scarcity, hardware constraints, and the need for standardized and ethical deployment of AI in wireless communication systems. Overall, the present work can serve as a baseline resource for both researchers and practitioners who need to leverage AI to deliver the Next Generation (XG) of wireless networks with MA, securely, intelligently, and at large scale.
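As a concrete, hedged illustration of the metaheuristic optimizers highlighted in the contributions above, the following is a minimal Particle Swarm Optimization sketch on a toy objective; the inertia and acceleration coefficients are common textbook values, not tied to any surveyed scheme:

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=100, seed=0):
    """Minimal PSO: each particle blends inertia, a pull toward its own
    best position (cognitive), and a pull toward the swarm's best (social)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Toy objective: sphere function, minimum at the origin.
best, best_val = pso_minimize(lambda x: np.sum(x ** 2), dim=3)
print(best_val)
```

In the RA problems surveyed later, the decision vector would instead encode, for example, per-user power levels, with the objective being sum rate or EE.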
The rest of the paper is outlined as follows. Section 2 provides a general description of various access technologies and their development in 6G. Section 3 describes the research methodology (PRISMA-Consistent Format). Section 4 introduces the fundamental concepts of 6G and MA technologies, including both traditional and modern schemes, and discusses their limitations in the 6G context. Section 5 presents a detailed survey of AI techniques—ML, DL, RL, and FL—relevant to wireless communication. Section 6 explores the application of AI in SS, including prediction models, sensing strategies, and DSA. Section 7 focuses on AI-driven protocol design, addressing intelligent Medium Access Control (MAC) scheduling, Resource Management (RM), and beamforming strategies. Additionally, Section 8 reviews AI-powered optimization techniques, including metaheuristics and multi-objective frameworks. Section 9 discusses emerging security and privacy challenges in AI-empowered MA systems. Section 10 identifies open research challenges and future directions, while summarizing key lessons learned and recent trends. Finally, Section 11 concludes the paper with a reflection on the transformative role of AI in shaping 6G MA systems.
The vision for 6G connectivity extends far beyond conventional communication paradigms, aiming to seamlessly integrate the digital, physical, and biological realms [17]. In the future network, end devices will no longer act as isolated units but will function as coordinated clusters that serve as man–machine interfaces. This transformation will enable ubiquitous computing across both edge and cloud infrastructures, facilitate knowledge systems that convert raw data into actionable intelligence, and incorporate advanced sensing and actuation capabilities to interact with and control the physical world. A key challenge in realizing this vision is efficiently placing services in Mobile Edge Computing (MEC) systems. Several studies have addressed this issue by optimizing various aspects of MEC operations. For example, the study [18] proposed a joint optimization model for service placement and edge server deployment, aiming to maximize the cumulative profit of edge servers while accounting for storage and computational constraints. Similarly, the work [19] applied Q-learning, an RL technique, to jointly manage task offloading and resource allocation in environments characterized by uncertain computational demands and strict delay constraints.
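The Q-learning approach to joint offloading and resource management in [19] can be illustrated with a toy sketch. The queue-load MDP, its cost values, and the exploring-starts sampling below are assumptions made purely for illustration, not the cited model:

```python
import numpy as np

# Toy offloading MDP: state = device queue load (0..4);
# action 0 = compute locally, action 1 = offload to the edge server.
N_STATES, N_ACTIONS = 5, 2

def step(state, action):
    """Illustrative dynamics: offloading drains the queue at a fixed
    transmission cost; local computing is cheap only at low load."""
    if action == 1:
        return max(state - 2, 0), -1.0                      # offload
    return min(state + 1, N_STATES - 1), -float(state)      # local compute

rng = np.random.default_rng(1)
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma = 0.1, 0.9

# Q-learning with exploring starts: uniformly sampled (state, action)
# pairs guarantee every Q-table entry is updated often.
for _ in range(20000):
    s, a = int(rng.integers(N_STATES)), int(rng.integers(N_ACTIONS))
    nxt, reward = step(s, a)
    # Standard temporal-difference update toward the Bellman target.
    Q[s, a] += alpha * (reward + gamma * Q[nxt].max() - Q[s, a])

# Under full load, offloading (action 1) should be the learned choice.
print(int(Q[N_STATES - 1].argmax()))
```

Real offloading controllers face stochastic channels and task arrivals, which is why the surveyed works move from tabular Q-learning to DRL.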
Further innovations include an offloading framework proposed in [20], which reduces task delay while balancing load across servers, thereby enabling collaborative task execution between user devices and edge nodes. In addition, the work [21] focused on minimizing global energy consumption by optimizing offloading ratios and computing RA under strict delay constraints. However, many of these approaches assume that necessary services are pre-deployed on edge servers. This assumption is increasingly unrealistic in AI-driven systems, where models must adapt dynamically to changing data distributions. To address this, the authors [22] proposed a strategy for broadcasting updated AI models to user devices, optimizing both service placement and RA to minimize energy and computation time. Complementing this, the study [23] developed a collaborative AI training framework in which multiple end users, coordinated by an edge server, work together to reduce latency and energy consumption. While these solutions focus on delay and EE, inference accuracy remains a critical and often overlooked metric. The current body of research begins to address this by proposing AI service placement strategies that balance inference quality with system performance.
Incentive mechanisms have also become vital for promoting data offloading in edge systems. For example, Ref. [24] introduced a reward-based scheme using learning algorithms to manage resources and encourage node participation. However, these approaches often lack formal guarantees. To overcome this, the current study adopts an Alternating Direction Method of Multipliers (ADMM)-based optimization strategy, offering scalability and theoretical robustness for large-scale MEC environments. Future 6G networks must support multifunctionality and intelligence to realize their full potential, facilitating the connection of trillions of devices that sense, compute, connect, and analyze data ubiquitously [25]. Unlike previous generations (1G–5G), where users were primarily communication endpoints, users in 6G will also serve as sensing targets, energy receivers, AI nodes, and service consumers. Central to this paradigm shift is the design of advanced MA techniques that efficiently allocate resources across a heterogeneous user base.
While SDMA and Orthogonal Frequency Division Multiplexing (OFDM) access have dominated past generations, growing interest is now directed toward NOMA schemes [26]. However, the traditional OMA vs. NOMA distinction is increasingly seen as insufficient for capturing the complexity of modern MA needs [27]. To address this, Ref. [28] proposed a new classification approach based on how multiuser interference is managed, introducing Rate-Splitting Multiple Access (RSMA) as a unifying and flexible MA strategy. RSMA seamlessly integrates OMA, power-domain NOMA, SDMA, and physical-layer multi-casting, offering practical advantages for deployment in diverse 6G scenarios [29].
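A minimal sketch of the rate-splitting principle behind RSMA follows, assuming a two-user downlink with a single common stream; the power split and channel gains are illustrative, not drawn from [28]:

```python
import numpy as np

def rsma_sum_rate(p_common, p_priv, gains, noise=1.0):
    """Two-user, 1-layer RSMA downlink sum rate (bps/Hz). The common
    stream is decoded first by both users (its rate is limited by the
    weaker one) and removed via SIC; each private stream then sees only
    the other user's private stream as residual interference."""
    gains = np.asarray(gains, float)
    p_priv = np.asarray(p_priv, float)
    total_priv = p_priv.sum()
    # Common-stream rate: must be decodable by every user.
    r_common = np.log2(1 + p_common * gains / (total_priv * gains + noise)).min()
    # Private rates after the common stream has been cancelled.
    interference = total_priv - p_priv  # the other user's private power
    r_priv = np.log2(1 + p_priv * gains / (interference * gains + noise))
    return float(r_common + r_priv.sum())

# Illustrative split: half the transmit power on the common stream.
print(round(rsma_sum_rate(p_common=5.0, p_priv=[2.5, 2.5], gains=[1.0, 0.2]), 3))
```

Setting the common power to zero recovers a NOMA/SDMA-like private-only scheme, while pushing all power into the common stream approaches multicasting, which is the sense in which RSMA unifies the other schemes.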
Several studies have examined the intersection of 6G and AI, focusing on network design and performance optimization. For example, Ref. [30] presents an in-depth overview of the 6G communication architecture and emphasizes the integration of AI across key areas such as TeraHertz (THz) communication, satellite networks, holographic communication, and quantum communication. The paper also identifies challenges like spectrum scarcity, EE, and ethical considerations, while underlining the importance of global standardization and multi-stakeholder collaboration. Expanding on this, the work [31] outlines a three-stage framework for AI integration in 6G networks: AI for network, which uses AI to enhance performance and efficiency; network for AI, which supports AI functions with enabling infrastructure; and AI as a service, where AI functionalities are embedded directly into the network. The study explores standardization efforts in this emerging area.
Furthermore, Ref. [32] provides a historical and architectural analysis of wireless networks, positioning 6G as a transformative leap enabled by technologies such as THz communication, ultra-massive Multiple-Input Multiple-Output (MIMO), quantum communication, and Reconfigurable Intelligent Surfaces (RIS). The role of AI and ML is again highlighted as central to achieving intelligent, self-optimizing networks, with applications ranging from smart cities and AVs to brain–computer interfaces. The study also addresses challenges such as regulatory complexity, security, and spectrum limitations, concluding that 6G is a foundational step toward even more advanced networks, such as 7G.
Among the new MA techniques gaining attention for 6G is Fluid Antenna Multiple Access (FAMA). As introduced in [33], FAMA uses fluid antennas to dynamically adjust antenna positions and maximize SNR with a single RF chain. The paper presents a detailed taxonomy of FAMA, examining its system architecture, channel modeling, diversity gain, and integration with other 6G technologies, including RIS, MIMO, THz communication, and AI. FAMA leverages the natural fading characteristics of wireless channels to manage interference, promising to be a valuable component of future 6G systems. Ref. [34] reviews the role of AI in enhancing DSA in wireless networks, focusing on CR, SS, and RM using DL and RL. The paper highlights the potential of generative AI for future 6G systems while addressing key challenges, including data privacy, model complexity, scalability, and regulatory issues. The authors call for further research on AI reliability, ethical deployment, and real-time implementation in edge environments to support intelligent and adaptive wireless communication. Table 1 presents a comparative review of the current literature on the integration of AI with 6G communication systems. It focuses on primary areas, including access technologies, DSA, AI-driven network optimization, and network architecture innovation. The table provides a concise summary of current innovations and remaining challenges in 6G and beyond, as well as in AI-powered wireless communication. It also outlines prospective future directions, representing the main contributions of each paper, their limitations, and further ideas.
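The single-RF-chain port-selection idea behind FAMA can be sketched minimally as follows; the port count and Rayleigh fading model are illustrative assumptions, not the system model of [33]:

```python
import numpy as np

def best_port(channel_gains):
    """Fluid-antenna port selection: with a single RF chain, the antenna
    'slides' to the port whose instantaneous channel power (and hence
    SNR) is largest."""
    power = np.abs(np.asarray(channel_gains)) ** 2
    return int(power.argmax()), float(power.max())

rng = np.random.default_rng(3)
# Rayleigh-faded complex gains observed at 16 candidate ports.
h = (rng.normal(size=16) + 1j * rng.normal(size=16)) / np.sqrt(2)
port, gain = best_port(h)
print(port, round(gain, 3))
```

Because the selected-port power is the maximum over many correlated fading samples, it is almost always well above the average channel power, which is the diversity gain FAMA exploits.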

Fig. 1 presents a three-tier hierarchical model that sketches the integration of intelligence and networking capabilities at the core of this research. The Core Layer comprises advanced MA techniques, including NOMA, RSMA, and massive MIMO. These physical-layer technologies govern Spectral Efficiency (SE) and massive connectivity but introduce complex configuration problems (e.g., power allocation and beamforming vector design). The Outer Layer, which serves as the intelligence engine, addresses these problems: AI/ML tools based on DL, and more specifically DRL, tackle the dynamic, non-linear optimization tasks of the core network, while Explainable AI resides in this layer to make the resulting control decisions trustworthy and transparent. The Middle Layer of 6G enablers, built on MEC, FL, and DSA, links the two. The bi-directional flow indicates a symbiotic relationship. In the AI-for-network (AI4NET) direction, DRL agents running on MEC servers dynamically optimize RSMA parameters to match diverse quality-of-service needs. In the network-for-AI (NET4AI) direction, the network supplies the low-latency data streams (e.g., via massive MIMO) and the distributed computing infrastructure (through FL/MEC) needed to continuously train and evolve the AI models. This structured integration allows the system to cope intelligently and dynamically with a highly complex communication environment.

Figure 1: Conceptual framework of AI-empowered multiple access for next-generation (6G) networks
3 Research Methodology (PRISMA-Consistent Format)
The study is based on a Systematic Literature Review (SLR) following the PRISMA 2020 guidelines to transparently identify, evaluate, and synthesize research on AI-enabled MA for 6G wireless networks. The review unifies and organizes the main trends in AI-based SS, dynamic resource assignment, protocol design, and the optimization strategies that the 6G communication system must address. The initial search yielded approximately 300 publications. After removing duplicates and applying inclusion/exclusion filters, 153 papers were shortlisted. Following quality assessment and full-text screening, 30 highly relevant studies were selected for detailed analysis, supplemented with benchmark works and standards to ensure completeness.
3.1 Research Question and Aims of the Review
The motivation for this review is the increasing use of AI in 6G wireless communication, where conventional MA schemes are no longer adequate for meeting strict performance criteria such as ultra-low latency, massive connectivity, and high spectral and energy efficiency. Although extensive literature exists on AI in wireless communication, a structured, systematic review that explicitly examines AI-enabled MA mechanisms, such as SS, DSA, resource optimization, and smart protocol construction, has been lacking. Consequently, this SLR has the following objectives:
1. Determine and categorize the state-of-the-art AI-based MA schemes for 6G networks.
2. Assess the performance of AI techniques (ML, DL, RL, FL, and XAI) applied to SS, interference management, and protocol adaptation.
3. Emphasize optimization methods, security issues, and new research to steer future 6G developments.
3.2 Scope and Research Questions
This review focuses on recent studies (primarily published after 2023) that address AI-assisted MA schemes in 6G and beyond wireless networks. It covers both theoretical and empirical publications on AI algorithms, network architecture, optimization schemes, and implementation issues. To direct this study, the following research questions (RQs) were developed:
RQ1: What are the most essential AI-based solutions in use in 6G multiple access design?
RQ2: How are AI techniques (ML, DL, RL, FL, and XAI) used in spectrum sensing, resource allocation, and protocol optimization?
RQ3: What computational models and optimization techniques are applied to enhance efficiency, scalability, and reliability in AI-enabled MA systems?
RQ4: What are the current research gaps, constraints, and outstanding challenges of implementing AI-based MA frameworks for 6G?
3.3 Inclusion and Exclusion Criteria
Studies were included according to the following criteria:
• Focus on AI-based MA in 5G, 6G, or beyond networks.
• Direct application of AI/ML/DL/RL/FL/XAI to SS, protocol design, optimization, or RA.
• Journal articles, conference papers, and review studies published between 2020 and 2025.
• Studies demonstrating performance enhancement, energy savings, security, or interference control in MA designs.
The exclusion criteria were as follows:
• Literature limited to physical-layer communication or non-AI MA methods.
• Articles discussing AI in wireless communication without addressing MA or RM.
• Non-peer-reviewed sources (e.g., case studies, theses, white papers).
• Articles lacking experimental, simulation-based, or theoretical rigor on AI-driven MA.
3.4 Information Retrieval Methodology
Reputable academic databases, such as IEEE Xplore, ScienceDirect, SpringerLink, Wiley Online Library, Elsevier, and MDPI, were systematically searched, with a cross-check using Google Scholar. The search strategy combined keywords with Boolean operators to ensure that all relevant information was retrieved: (“AI” OR “ML” OR “DL” OR “RL”) AND (“Multiple Access” OR “NOMA” OR “RSMA”) AND (“Spectrum Sensing” OR “Dynamic Spectrum Access” OR “Protocol Design” OR “Optimization”) AND (“6G” OR “Beyond 5G” OR “Next Generation Networks”).
3.5 Data Retrieval and Synthesis
Information was systematically extracted and organized into a structured table, covering the following elements: publication year, research focus, AI method used, contributions, limitations, and future scope. The review used qualitative content analysis to identify trends across four dimensions: spectrum sensing and DSA based on AI; AI-assisted protocol design and scheduling; resource allocation and optimization through AI-based schemes; and AI-empowered MA system security, privacy, and explainability. The taxonomy of the reviewed literature is summarized in visual and tabular maps, as shown in Fig. 2, which illustrate the interrelationships among AI techniques, computational models, and real-world application examples.

Figure 2: PRISMA-style flow diagram of the study selection process
3.6 Novelty and Scope of the Survey
This study is methodologically rigorous, ensuring reproducibility and academic validity. The main contributions are:
1. A systematic review of AI-based MA studies for 6G, demonstrating the interdependences between access mechanisms and learning paradigms.
2. A new mapping of enabling technologies (NOMA, RSMA, RIS, MEC, ISAC, etc.) to appropriate computational models and areas of application.
3. An extensive taxonomy of AI approaches to multiple access, establishing their comparative advantages, constraints, and emerging roles in adaptive communication.
4. Identification of critical research gaps and a vision for constructing secure, explainable, and real-time AI-driven 6G MA frameworks.
4 Fundamentals of 6G and Multiple Access Technologies
6G envisages a paradigm shift beyond what 5G currently offers and is expected to provide wireless communication capabilities such as THz communications, holographic beamforming, ultra-massive connectivity, and sub-millisecond latency. These innovations should meet the demanding needs of future applications, such as immersive XR, autonomous systems, and real-time digital twins. The key to meeting these objectives is advancing MA technologies that govern how scarce spectrum is shared among various users and devices. Although established OMA technologies, including TDMA, FDMA, and Code Division Multiple Access (CDMA), among others, have been successfully used by previous generations, they have severe limitations in the more dynamic, dense conditions expected in 6G. As a result, alternative multiple access schemes have been proposed, with NOMA schemes prioritized (such as Power Domain NOMA, SCMA, and multi-user shared access). This section discusses key MA techniques for 6G, along with their development, key distinctions, and their limitations in meeting future connectivity requirements.
4.1 Key Features of 6G Technology
To meet the stringent performance requirements of sixth-generation (6G) networks—particularly ultra-low latency, extreme data rates, and massive connectivity—recent research has focused on enhancing electromagnetic wave (EW) propagation characteristics, including reflection, refraction, and diffraction. In this context, Index Modulation (IM) techniques have gained significant attention for their ability to exploit reconfigurable antenna structures to transmit additional information, thereby improving SE. The study in [35] introduced a hybrid approach combining IM with Reconfigurable Intelligent Surfaces (RIS), where RIS elements were deployed not only at the receiver but also along the transmission path to enhance signal quality. Two novel modulation schemes—RIS-space shift keying (RIS-SSK) and RIS-spatial modulation (RIS-SM)—were proposed, offering higher energy efficiency and greater structural simplicity than conventional massive MIMO systems. Moreover, the authors designed greedy and maximum-likelihood (ML) detection algorithms to further optimize detection performance.
Similarly, the work [36] proposed a non-orthogonal waveform (NOW) modulation technique based on faster-than-Nyquist signaling for DFT-s-OFDM systems. This approach significantly improved SE and reduced peak-to-average power ratio (PAPR) by 1.8–5.8 dB compared to traditional orthogonal waveforms, making it suitable for 5G and 6G communication systems. Fig. 3 illustrates an advanced wireless communication scenario in which an RIS assists the connection between an Access Point (AP) and several User Equipment (UEs). The system is designed to overcome signal blockages and enhance spectral efficiency when direct Line-of-Sight (LoS) links are obstructed. The intelligence core of the system is the RIS Controller; rather than acting as a passive relay, it dynamically computes and actively adjusts the phase-shift and amplitude reflection coefficients (Φ) of the numerous RIS elements, communicated to the RIS over a dedicated low-rate control link (usually wired or short-range wireless). The main objective is joint optimization: the RIS’s passive beamforming is tuned to maximize the received signal at the targeted UE while simultaneously avoiding nulls or reducing interference to other non-target UEs and any additional interfering sources. This adaptive control can be driven by channel state information (CSI) obtained through uplink probing, together with ML models running on the RIS controller or a centralized server, providing high transmission reliability and advanced interference management over the coverage region without extra transmission power expenditure.

Figure 3: RIS-assisted signal enhancement for user connectivity
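The phase-alignment principle behind the controller's passive beamforming can be made concrete with a minimal NumPy sketch. All variable names, channel models, and element counts below are illustrative assumptions, not taken from [35]: under perfect CSI, each RIS element's phase shift cancels the phase of its cascaded AP-RIS-UE channel so that every reflected path adds coherently at the UE.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                               # number of RIS elements (assumed)
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # AP -> RIS channel
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # RIS -> UE channel

# Textbook passive-beamforming rule under perfect CSI: each element cancels
# the phase of its cascaded channel so all reflected paths combine coherently.
phi = -np.angle(h * g)
theta = np.exp(1j * phi)                             # unit-modulus reflection coefficients

aligned = np.abs(np.sum(h * theta * g)) ** 2
random_cfg = np.abs(np.sum(h * np.exp(1j * rng.uniform(0, 2 * np.pi, N)) * g)) ** 2
print(aligned > random_cfg)                          # coherent combining beats random phases
```

With phase alignment, the received power equals the square of the sum of the cascaded channel magnitudes, the best any unit-modulus configuration can achieve.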
The millimeter-wave (mmWave) spectrum, initially introduced in 5G New Radio (NR), remains a cornerstone for 6G communications owing to spectrum extending up to 300 GHz, which offers contiguous bandwidths significantly exceeding those available to sub-6 GHz technologies. According to Shannon’s theorem, this expanded bandwidth directly enhances channel capacity, enabling ultra-high-speed data transmission. Additionally, the shorter wavelengths of mmWave facilitate the integration of compact, high-gain antenna arrays that support directional beamforming, beneficial for both secure communications and sensing applications. However, mmWave systems face challenges such as high path loss, LoS dependency, and mobility-induced fading, necessitating advanced beam management and mobility solutions [37,38]. As data rates approach Tbps, the THz band (0.1–10 THz) has emerged as a promising solution for ultra-high-speed wireless links, backhaul/fronthaul connectivity, and the Internet of Nano-Things (IoNT) [39]. Despite challenges such as severe propagation loss and limited transceiver power, recent advancements—including distance-aware physical-layer designs, ultra-massive MIMO, and RIS-assisted THz links—have demonstrated communication ranges exceeding 100 m under both LoS and NLoS conditions.
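The bandwidth-capacity relationship invoked above is Shannon's AWGN capacity, C = B log2(1 + SNR). A brief numeric sketch (the bandwidth and SNR figures are arbitrary illustrative values, not measurements) shows why a wider mmWave allocation translates directly into higher capacity at the same SNR:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """AWGN channel capacity C = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (10 / 10)                    # 10 dB SNR, linear scale
sub6 = shannon_capacity(100e6, snr)      # assumed 100 MHz sub-6 GHz carrier
mmwave = shannon_capacity(2e9, snr)      # assumed 2 GHz mmWave allocation
print(f"sub-6: {sub6 / 1e9:.2f} Gb/s, mmWave: {mmwave / 1e9:.2f} Gb/s")
```

At equal SNR, capacity scales linearly with bandwidth, so the 20x wider allocation yields 20x the capacity; in practice mmWave links operate at lower SNR due to path loss, which beamforming partly recovers.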
Looking ahead, 6G networks will integrate multiple enabling technologies—such as sub-THz and visible-light communication (VLC), ultra-dense networks, and aerial platforms—to achieve data rates ranging from 100 Gbps to several Tbps. Initial deployments are expected to rely on sub-THz frequencies for short-range LoS communications, supported by multi-polarized high-gain antenna arrays. Nonetheless, hardware complexity and energy efficiency remain significant design challenges [40]. Fig. 4 shows emerging WLAN applications of the THz band that exploit the vast unlicensed bandwidth in the sub-THz and THz (100 GHz–10 THz) range. This extreme bandwidth is needed to enable Virtual/Augmented Reality (VR/AR) and holographic communication, where peak data rates of over 100 Gbps are required to deliver immersive, high-fidelity content with nearly zero latency. The figure shows short-range, high-gain THz connections that employ highly directional pencil-beam antennas to overcome the significant path loss and atmospheric absorption of the band. Moreover, the presence of two-hop THz links highlights the need for multi-hop relaying and mesh networking topologies to expand coverage and provide robust connectivity, as THz signals have a limited communication range.

Figure 4: Emerging WLAN applications
Fig. 5 presents the principal architectural characteristics of THz-based 6G networks, focusing on the incorporation of THz functionality at the network level. It requires deep integration with architectures such as SDN and Network Function Virtualization (NFV) to control highly sporadic, bandwidth-intensive THz traffic dynamically. The figure explicitly includes IoT data centers to show how THz communication will provide high-speed, high-density intra-data-center connections. This approach surpasses traditional fiber in supporting the high data transfer and processing requirements imposed by the vast amount of data generated by mMTC. Lastly, the figure underscores the importance of THz in high-speed access-backhaul systems. In this role, THz links serve both as wireless fiber providing multi-gigabit-per-second connectivity for last-mile access (e.g., small-cell fronthaul) and as high-capacity backhaul links connecting macro Base Stations (BSs) to the core network, exploiting the low latency and high throughput of the band.

Figure 5: THz-enabled 6G architecture with base stations, small cells, and relay links
Optical Wireless Communication (OWC) has also emerged as a strong complement to the THz spectrum, offering interference-free, high-capacity, and ultra-low-latency wireless links. The optical domain comprises infrared (IR) (760 nm–1 mm), visible light (360–760 nm), and ultraviolet (UV) (10–400 nm) bands, each serving distinct application scenarios [41]. Among these, Visible Light Communication (VLC) enables dual-purpose operation for both illumination and data transmission. Recent advancements in blue laser-based lighting have achieved data rates up to 26 Gbps, allowing high-speed light-based IoT applications [42,43]. Meanwhile, UV communication supports non-line-of-sight (NLoS) transmission through atmospheric scattering, though safety considerations remain crucial [44]. OWC technologies—including LiFi, VLC, Optical Camera Communication (OCC), Light Detection and Ranging (LiDAR), and Free Space Optics (FSO)—are enabling high-throughput connectivity across diverse domains, such as indoor, vehicular, underwater, and satellite systems. While OWC can provide up to 1000× more bandwidth than conventional RF systems, its performance is constrained by the bandwidth of optoelectronic components. Recent innovations in high-speed LEDs and silicon photomultipliers have achieved data rates exceeding 1 Gbps. To mitigate orientation sensitivity, techniques such as adaptive spatial modulation and multi-directional transmitters are being explored [45].
In scenarios where fiber deployment is impractical or economically infeasible, Free Space Optics (FSO) offers a robust backhaul/fronthaul alternative, achieving fiber-like data rates and seamless integration with optical networks—particularly advantageous for micro-cellular and mobile backhaul systems [46]. Expanding beyond terrestrial systems, three-dimensional (3D) network architectures integrating terrestrial, aerial (UAVs), and satellite communications are being developed to achieve global coverage and ubiquitous connectivity. This unicellular network paradigm eliminates traditional cell boundaries, enabling continuous handovers and user-centric communication across heterogeneous technologies [46]. Finally, spectrum scarcity and underutilization remain major challenges in 6G systems. DSM and Cognitive Radio (CR) techniques will play pivotal roles in enhancing SE through listen-before-talk and AI-driven adaptive optimization [47,48]. Moreover, Symbiotic Radio (SR) extends the CR concept by combining it with ambient backscatter communication (AmBC), enabling low-power passive IoT connectivity. Emerging frameworks that integrate AI and blockchain are being explored to allow for secure, autonomous, and transparent spectrum sharing, further reinforcing the intelligence and resilience of future wireless networks [49].
Finally, the 6G vision goes vertical, with satellites and UAVs serving as BSs to offer unprecedented flexibility and continuous global coverage, reaching previously unreachable areas (oceans and mountainous regions) via a decentralized design. In addition to spectral innovations and architectural changes, the transformative power of 6G lies in central intelligent technologies that connect the physical and digital worlds. The push for ultra-low latency and enormous processing power creates a need to transition to Edge Computing. This approach strategically distributes computational units close to the User Equipment (UE) to keep delays to a minimum, supporting applications that require real-time performance, such as self-driving vehicles and industrial automation. This distributed intelligence, which frequently employs Federated Learning, is vital for privacy and hyper-responsiveness [50]. Moreover, Digital Twins (DTs), which are high-fidelity virtual replicas of the network, enable continuous simulation, fault prediction, and dynamic resource allocation. This addresses the complexity of managing the network and improves system fidelity and real-time performance [51]. Importantly, 6G is also built to serve as the base communication layer for the Metaverse and high-resolution Extended Reality (XR) applications. These applications require Tb/s data rates and millisecond-level latency to deliver effectively immersive and ubiquitous holographic experiences, satisfying one of the fundamental design principles of the 6G architecture. Last but not least, Blockchain is incorporated to secure this hyper-connected ecosystem by offering a decentralized registry. This establishes secure, transparent operational structures, such as dynamic spectrum sharing and identity management, to enhance the reputation and reliability of the 6G network [52].
4.2 Analysis of Strengths and Limitations across 6G Technologies
To achieve the promises of 6G wireless communication, a wide range of cutting-edge technologies is being developed and implemented. All these technologies, whether high-frequency communication schemes such as mm-wave and THz bands, OWC, RIS, or DSM, offer distinct benefits and address specific 6G challenges within the ecosystem. But they largely depend on use-case scenarios and deployment environments, with performance, applicability, and limitations varying accordingly. In this section, a comparative analysis is conducted to evaluate the most significant enabling technologies of 6G on various key metrics, including data rate, spectral efficiency, hardware complexity, coverage, and adaptability. This comparison is presented in Table 2 to provide a clear understanding of how these innovations relate to one another and to the broader context of future global, intelligent, and ultra-fast wireless networks.

4.3 Overview of Traditional and Modern Multiple Access Techniques
Since the primary goal of 6G wireless networks is to achieve new heights in connectivity, capacity, and intelligence, it is necessary to innovate at the PL. Future 6G systems have to facilitate ultra-high data rates, massive device connectivity, improved spectral and EE, and management of signals in diverse and dynamic circumstances. A diverse set of more sophisticated technologies is being considered to achieve these lofty targets, including new modulation and coding strategies, NOMA, ultra-massive MIMO, and intelligent surface technology. Innovations that complement spectrum utilization and system performance are also being reviewed to optimize spectrum use, such as in-band full-duplex communication, orbital angular momentum-based transmission, and Holographic Radio (HR). These technologies, when combined, form the basis for a flexible, efficient, and intelligent PL that promises to lay the foundation for 6G’s transformative capabilities.
Fig. 6 illustrates the paradigm of ubiquitous 6G-enabled connectivity, highlighting seamless networking across multiple layers between terrestrial and non-terrestrial networks (NTNs) to enable global data communication and sharing. The scenario leverages the key characteristics of 6G—RIS, THz communication, and an integrated SDN Core—to connect highly diverse environments. The networks encompass connectivity between space networks (such as Low-Earth Orbit (LEO) satellites providing backhaul), airborne networks (UAVs and High-Altitude Platform Stations acting as aerial BSs), terrestrial networks (dense ground infrastructure), sea networks (maritime communication/sensing), and submarine networks (acoustic or specialized underwater RF/optical links). Seamless integration is technically achieved through network slicing and AI-based resource orchestration, which dynamically assign and control connectivity between these otherwise incompatible domains. This architecture provides global coverage not only for simple data sharing but also for real-time sensing, edge processing, and shared situational awareness, effectively making the world a single heterogeneous communications and computing platform.

Figure 6: 6G application scenarios across space, air, land, sea, and underwater domains
As 6G aims to enable comprehensive applications, utilize the whole spectrum, ensure global connectivity, support all sensory modalities, provide robust security, and drive complete digitalization, the demand for diverse and sophisticated application environments becomes evident. Achieving ultra-high data rates in the terabit-per-second range, supporting massive connectivity, expanding coverage, and ensuring secure communication present various new challenges, particularly in waveform and modulation design. Waveform design is a cornerstone of communication system performance and must be tailored to meet the specific requirements of different 6G use cases. While 5G systems primarily relied on multi-carrier waveforms, such as OFDM, for their high spectral efficiency, the unique demands of 6G call for more specialized waveform strategies. The use of higher-frequency bands, as expected in 6G, introduces challenges such as increased path loss and the need for efficient broadband power amplification at these frequencies. To mitigate these issues, research has examined low-PAPR single-carrier waveforms, as highlighted in recent studies.
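The PAPR motivation for single-carrier waveforms can be illustrated numerically. The following toy sketch (symbol counts and modulation chosen for illustration, not taken from the cited studies) compares the peak-to-average power ratio of an OFDM symbol, where many independently modulated subcarriers add up in the time domain, with a constant-envelope single-carrier QPSK burst:

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials = 256, 2000                                # subcarriers / symbols, Monte Carlo runs
qpsk = (rng.integers(0, 2, (trials, N)) * 2 - 1 +
        1j * (rng.integers(0, 2, (trials, N)) * 2 - 1)) / np.sqrt(2)

def papr_db(x):
    """Peak-to-average power ratio per row, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(np.max(power, axis=-1) / np.mean(power, axis=-1))

ofdm = np.fft.ifft(qpsk, axis=-1)                    # multi-carrier time-domain signal
sc = qpsk                                            # single-carrier: constant-envelope QPSK

print(f"OFDM 99th-pct PAPR ~{np.percentile(papr_db(ofdm), 99):.1f} dB, "
      f"single-carrier ~{np.percentile(papr_db(sc), 99):.1f} dB")
```

The roughly 10 dB gap is what forces OFDM power amplifiers to back off, wasting efficiency, which becomes acute at the high frequencies 6G targets.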
In highly mobile environments, orthogonal time-frequency space (OTFS) and other transform-domain waveforms offer advantages due to their effective handling of Doppler shifts and delay. In scenarios where maximizing data throughput is critical, methods like overlapped multiplexing and SE-oriented frequency multiplexing have been explored to enhance SE [53]. Additionally, integrating ISAC technologies requires waveform designs that support both data transmission and environmental sensing. Modulation schemes also play a central part in determining the performance and reliability of communication systems. While Quadrature Amplitude Modulation (QAM) remains dominant in 5G New Radio (NR) and Long Term Evolution (LTE), emerging alternatives have attracted attention due to their potential advantages. These include group interpolation, asymmetric QAM, multidimensional and specific QAM, and IM. These methods have demonstrated benefits, including lower PAPR, improved robustness, and enhanced performance under diverse channel conditions.
Robust channel coding plays a crucial role in enhancing the reliability, throughput, and performance of contemporary communication systems. The evolution of Error-Correcting Codes (ECCs) has transitioned from algebraic to probabilistic methods, significantly enhancing performance. Among the most widely adopted ECCs are polar codes, Low Density Parity Check (LDPC) codes, and Turbo codes: Turbo codes were standardized in 4G LTE, while 5G NR adopted LDPC and polar codes for the data and control channels, respectively. Despite their differences in decoding techniques, these codes share a common Bayesian foundation. They are viewed as strong contenders to meet the stringent demands of 6G, such as ultra-low latency and EE. All three are linear block codes and can be decoded using Belief Propagation (BP) techniques. LDPC codes benefit from sparse parity-check matrices, Turbo coding employs the Bahl, Cocke, Jelinek, and Raviv (BCJR) algorithm over trellis structures, and Polar codes rely on Successive Cancellation (SC) decoding. For short block lengths, where polarization weakens, advanced methods such as SC flip/list and BP flip/list decoding are necessary to improve performance [54].
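As a concrete illustration of one of these codes, the sketch below implements the polar transform x = u F^(⊗n) over GF(2) with its recursive butterfly structure. The N = 8 information-bit positions shown are one commonly used reliability ordering and, like the chosen bits, are illustrative assumptions rather than values from [54]:

```python
import numpy as np

def polar_encode(u: np.ndarray) -> np.ndarray:
    """Polar encoding x = u * F^(kron n) over GF(2); len(u) must be a power of two."""
    x = u.copy()
    n, N = 1, len(u)
    while n < N:
        for i in range(0, N, 2 * n):
            # Butterfly stage: upper half absorbs the XOR of the lower half.
            x[i:i + n] ^= x[i + n:i + 2 * n]
        n *= 2
    return x

# Toy rate-1/2 code: N = 8, K = 4. Positions {3, 5, 6, 7} are a commonly
# quoted high-reliability set for N = 8; the remaining positions are frozen to 0.
N, info_positions = 8, [3, 5, 6, 7]
u = np.zeros(N, dtype=int)
u[info_positions] = [1, 0, 1, 1]        # example information bits (assumed)
print(polar_encode(u))
```

A handy sanity check is that the polar transform is its own inverse over GF(2), so encoding the codeword again recovers the original input vector.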
Although their decoding principles are related, improvements in encoding—such as optimized generator polynomials—have led to simpler factor graphs and more energy-efficient hardware implementations. For instance, 4G LTE uses a partitioned Max-log-BCJR Turbo decoding scheme. However, 5G NR employs LDPC decoding with adaptive min-sum belief propagation and Polar code decoding via node-oriented successive cancellation. Moving toward a unified circuit-level ECC architecture is critical for 6G. With 6G’s requirements for ultra-reliability and minimal latency, shorter code lengths are expected, reducing the efficiency of traditional decoding due to decreased polarization and randomness. Alternative solutions, such as near-maximum-likelihood decoding—including ordered-statistics decoding and guessing-random-additive-noise decoding—offer improved consistency. Additionally, new coding strategies, such as polarization-adjusted convolutional codes and 2D spatiotemporal coding for massive MIMO, can deliver better reliability and throughput with low decoding delay, making them promising candidates for next-generation systems [55].
The move from LTE employing OFDM access to the upgraded OFDM in 5G NR reflects the ongoing reliance on Orthogonal Multiple Access (OMA) strategies. As 6G networks aim to accommodate far higher connection densities than 5G, Non-Orthogonal Multiple Access (NOMA) is considered a promising solution to address critical requirements, including ultra-low latency, cost-effectiveness, large-scale connectivity, and robust reliability [56]. Initially introduced by Nippon Telegraph and Telephone, NOMA departs from the outmoded OMA, which separates multiple user terminals accessing the same radio resources in the time, code, or frequency domains. NOMA instead introduces controlled inter-user interference during transmission and relies on Successive Interference Cancellation (SIC) at the receiver to separate the signals of different users. This approach improves SE and system capacity, reduces latency, and decreases dependency on CSI, albeit at the cost of higher system complexity.
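The SIC principle can be sketched for a two-user power-domain downlink. The power split, noise level, and BPSK modulation below are arbitrary illustrative choices: the near (strong) user first decodes the dominant far-user signal, reconstructs and subtracts it, and only then decodes its own signal.

```python
import numpy as np

rng = np.random.default_rng(1)
P, a_weak, a_strong = 1.0, 0.8, 0.2          # power split favors the weak (far) user
bits_w = rng.integers(0, 2, 1000)
bits_s = rng.integers(0, 2, 1000)
s_w = 2 * bits_w - 1                          # BPSK symbols for the far user
s_s = 2 * bits_s - 1                          # BPSK symbols for the near user

# Superposed downlink signal as observed by the near user's high-SNR channel.
x = np.sqrt(P * a_weak) * s_w + np.sqrt(P * a_strong) * s_s
y = x + 0.05 * rng.normal(size=1000)

# SIC: decode the dominant far-user signal first (own signal acts as noise)...
w_hat = np.sign(y)
# ...subtract its reconstructed contribution, then decode the near user's signal.
residual = y - np.sqrt(P * a_weak) * w_hat
s_hat = np.sign(residual)

print((w_hat == s_w).mean(), (s_hat == s_s).mean())
```

At this noise level both stages decode essentially error-free; in practice imperfect cancellation propagates errors, which is one source of the complexity penalty noted above.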
Various NOMA schemes include Power-based NOMA, Code-oriented NOMA (such as SCMA), and Interleave-Based NOMA. Integrating NOMA with other emerging wireless technologies has shown promising results—these include mmWave communications, Massive MIMO systems, VLC, EH, PL Security, CR, collaborative communication, and Wireless Caching. For example, research explores combining NOMA with Artificial Noise in Massive MIMO-NOMA to enhance secrecy and EE. Similarly, incorporating RIS into NOMA frameworks improves passive beamforming and deployment efficiency. Studies also examine NOMA-assisted AmBC as a means of enhancing both spectral and EE, while maintaining system reliability and security. Despite significant academic and industrial interest in NOMA, its full implementation in 5G networks was hindered by technical limitations. For 6G systems, effectively leveraging NOMA will require simplified yet robust multi-user interference suppression, robust security and reliability provisions, and the formalization of a standardized NOMA framework [57].
Ultra-massive MIMO, building on the foundational work by Marzetta in 2009, extends the capabilities of massive MIMO—a cornerstone of 5G known for its SE gains. As 6G evolves, ultra-massive MIMO envisions deploying hundreds to thousands of antennas to further enhance EE, network flexibility, coverage, and precise positioning, particularly across broader frequency bands [58]. This technology promises significant improvements, such as advanced multiplexing, interference mitigation, energy savings, and expanded support for NTNs. Its superior spatial resolution enables accurate 3D positioning, particularly in complex environments. However, near-field and wideband effects, which introduce sparsity in the angle and delay domains, pose new challenges. To address them, research is actively progressing in channel modeling, beam management, codebook design, and beam training. As ultra-massive MIMO expands into high-frequency bands, such as THz and mmWave, ongoing investigations include modulation schemes, channel characteristics, and circuit design. An emerging trend is the integration of RIS, which offers passive alternatives to active antennas, thereby improving coverage, capacity, and energy efficiency. Additionally, distributed ultra-massive antenna systems—where antenna elements are spread across large areas—can maintain high SE, reduce power consumption, and provide consistent service quality [59].
AI is increasingly being used to enhance ultra-massive MIMO operations, including beamforming, channel estimation, and user identification. Despite its potential, real-time deployment and data scarcity remain barriers. Looking ahead, ultra-massive MIMO is also being explored for its applicability in integrated space-air-ground-sea networks, including skywave, underwater acoustic, and satellite communications, broadening its impact beyond terrestrial wireless systems.
Coordinated Multi-Point (CoMP) is a communication strategy where multiple access points work together to serve mobile users, effectively creating a network-layer MIMO system. This approach extends spatial diversity beyond what is achievable with conventional physical-layer MIMO techniques. First introduced in 3rd Generation Partnership Project (3GPP) Release 11 for LTE-Advanced, CoMP has become increasingly crucial in 5G, especially for mitigating downlink inter-cell interference and supporting joint uplink user detection [60]. In the context of 6G, with the expansion into high-frequency spectrum bands above 10 GHz, the role of CoMP becomes even more critical due to challenges such as signal blockage. By utilizing diversity at the BS level, CoMP complements antenna-level spatial techniques and enhances system reliability.
CoMP also supports the evolution toward cell-free Radio Access Networks (RANs), where UEs can simultaneously connect to multiple BSs within the same radio access technology. In this architecture [61], a centralized processing unit manages coherent transmissions across many distributed, single-antenna APs. Recent studies suggest that cell-free massive MIMO can outperform traditional cellular MIMO in terms of fronthaul efficiency and overall system performance. However, despite its advantages, CoMP faces key implementation challenges. Effective deployment depends on how BSs are clustered, which remains an area of active research. Additionally, ensuring tight synchronization among cooperating BSs is essential to prevent inter-symbol and inter-carrier interference. Coherent channel estimation and equalization across multiple BSs further increase the system’s computational demands.
In-Band Full-Duplex (IBFD) is an emerging wireless communication technique that enables simultaneous transmission and reception on the same frequency band, offering a potential twofold increase in SE and greater flexibility in network access compared to conventional Frequency-Division Duplex (FDD) and Time-Division Duplex (TDD) systems. Although its foundational concept dates back to continuous-wave radar systems, the practical application of IBFD has gained momentum only in recent years, thanks to advances in interference cancellation.
Research [62] efforts now focus on a range of IBFD use cases, including relay communication, multi-node full-duplex systems, and joint radar-communication platforms, which promise gains in throughput, sensing, and simultaneous connectivity. However, a significant hurdle remains in effectively managing self-interference that occurs when a device transmits and receives on the same channel. To address this, scientists are developing SIC methods, both electronic and optical, particularly for sub-6 GHz applications. As bandwidth increases, challenges become more complex, especially for THz and OWC bands—key targets for 6G. Solutions under investigation include shared antenna structures, iterative interference suppression, and theoretical modeling of full-duplex systems. Notably, Optical SIC shows promise in high-frequency environments due to its broad bandwidth and precision. These developments mark IBFD as a transformative technology for future wireless systems, with ongoing research aimed at overcoming the technical barriers to real-world deployment.
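A minimal digital-domain SIC sketch illustrates the idea (tap values, signal levels, and filter length are assumed for illustration only): because the full-duplex node knows its own transmitted samples, it can estimate the self-interference channel by least squares and subtract the reconstructed interference from the received signal.

```python
import numpy as np

rng = np.random.default_rng(2)
n, taps = 2000, 4
tx = rng.normal(size=n)                         # known transmitted samples
h_si = np.array([0.9, -0.3, 0.15, 0.05])        # unknown self-interference channel (assumed)
soi = 0.01 * rng.normal(size=n)                 # weak signal of interest plus noise

# Received signal: strong self-interference drowns out the signal of interest.
X = np.column_stack([np.roll(tx, k) for k in range(taps)])
y = X @ h_si + soi

# Digital SIC: least-squares estimate of the SI channel from the known tx
# samples, then subtraction of the reconstructed self-interference.
h_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_clean = y - X @ h_hat

suppression_db = 10 * np.log10(np.mean(y**2) / np.mean(y_clean**2))
print(f"self-interference suppressed by ~{suppression_db:.0f} dB")
```

Real IBFD systems cascade analog/RF cancellation before this digital stage, since the self-interference can exceed the desired signal by 100 dB or more, far beyond what digital subtraction alone can handle.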
Orbital Angular Momentum (OAM), a natural attribute of EW, introduces a novel dimension to wireless communication by using spiral-phase wavefronts to carry information. Different orthogonal OAM modes can be transmitted simultaneously over the same frequency band using distinct antennas, significantly enhancing SE and channel capacity without additional bandwidth. Initially explored in optical systems, OAM has expanded into the radio, acoustic, mmWave, and THz domains. Its potential is evident in applications such as FSO, optical fiber links, and acoustic channels, where it supports high-data-rate transmissions. Combining OAM with MIMO architecture further boosts communication capacity. The study [63] proposed two OAM-based MIMO frameworks that achieve improved throughput across diverse scenarios. However, practical deployment faces challenges such as beam divergence, alignment sensitivity, and limitations in NLoS environments. Despite these obstacles, OAM has shown promise in emerging areas such as radar systems and microwave sensing, positioning it as a promising enabler for 6G. To achieve real-world adoption, ongoing work must focus on improving beam control, refining system models, and addressing commercialization barriers.
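The mode orthogonality that underpins OAM multiplexing can be checked numerically. The sketch below (an idealized ring-sampling model, with the aperture size assumed for illustration) samples the spiral-phase fields exp(jlφ) on a circular aperture and verifies that modes with distinct topological charge l are mutually orthogonal, which is what lets them share one frequency band:

```python
import numpy as np

M = 256                                          # samples around a circular aperture (assumed)
phi = 2 * np.pi * np.arange(M) / M

def oam_mode(l: int) -> np.ndarray:
    """Spiral-phase field exp(j*l*phi) sampled on a ring, unit-normalized."""
    return np.exp(1j * l * phi) / np.sqrt(M)

# Gram matrix of inner products between modes l = -2..2: off-diagonal terms
# vanish, so the modes can be transmitted together and separated at the receiver.
gram = np.array([[np.vdot(oam_mode(a), oam_mode(b)) for b in range(-2, 3)]
                 for a in range(-2, 3)])
print(np.round(np.abs(gram), 6))
```

The Gram matrix comes out as the identity, confirming orthogonality; in real links, beam divergence and misalignment perturb this ideal structure, which is exactly the deployment challenge noted above.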
As wireless communication evolves beyond 10 GHz to meet growing data demands, challenges such as increased signal attenuation, reduced diffraction, and heightened interference emerge. While massive MIMO with active beamforming in the mmWave spectrum offers a partial solution, it is power-intensive and complex. Consequently, researchers are exploring alternative technologies, such as RIS. RIS consists of programmable metasurfaces that passively manipulate EW by adjusting their reflective properties. By deploying RIS on surfaces like walls and ceilings, wireless environments can be transformed into smart radio environments, enabling enhanced signal strength with reduced energy consumption compared to active massive MIMO systems. Unlike traditional massive MIMO arrays tailored to specific radio technologies, RIS operates effectively across a broad range of frequencies, including both RF and optical bands, making it an economical solution for ultra-wideband 6G networks.
Despite its advantages, RIS implementation faces hurdles, including accurately modeling near-field channels and the complexity of integrating devices into third-party infrastructures not owned by mobile network operators. Therefore, standardized frameworks, interface agreements, and communication protocols are crucial for widespread deployment across public and private domains [64]. Additionally, RIS shows promise in aerial network scenarios. UAVs, which are becoming integral to 6G networks due to their mobility and coverage capabilities, often experience THz propagation issues due to motion and obstructions. Recent studies have explored AI-driven solutions—specifically, attention-based models—for predicting 3D RIS beam configurations in UAV-assisted networks. These AI models have demonstrated improved performance over traditional LSTM and gated recurrent unit architectures.
HR introduces a novel approach to wireless communication by leveraging controlled interference from EW to shape and reconstruct the EW environment dynamically. Using spatially continuous microwave apertures enables fine-grained spatial multiplexing, achieving ultra-high spectral and EE, and supporting massive traffic loads and high-capacity demands. Holographic MIMO, a realization of this concept, represents the theoretical limit of multi-antenna systems confined to a finite surface. Rather than mitigating interference, HR exploits it as a constructive tool, enabling applications such as high-precision localization, wireless energy transfer, industrial automation, and Massive IoT (MIoT) connectivity. Additionally, it reduces the need for conventional CSI by capturing RF spectral holograms of transmitters via holographic interference.
Two primary methods are being explored for practical deployment: reconfigurable holographic surfaces, which use densely arranged subwavelength elements, and tightly integrated broadband antenna arrays supported by high-power unitraveling-carrier photodetectors [65]. Both techniques aim to deliver high performance while reducing power consumption and costs. Despite its promise, HR still faces notable challenges. These include the absence of robust theoretical models, the need for accurate and reliable channel modeling, and the difficulty of processing vast volumes of data with low latency and high reliability. Addressing these hurdles is critical for realizing the full potential of HR in future 6G networks.
Physical-layer multicasting is a transmission technique in which a single coded bit stream carries multiple unicast messages to different users. In contrast to conventional multicasting (such as radio or television), physical-layer multicasting allows each user to extract only the part of the stream intended for them, rather than everyone receiving the same message. The approach differs from OMA, where individual messages are sent on different time, frequency, or code resources. The coding gain is achieved by jointly encoding multiple medium-sized packets into a single, longer packet, which also yields one of its main advantages: improved reliability. This aspect is beneficial in NTNs, such as geostationary satellite communications (e.g., DVB-S2X), where a single coded frame can serve multiple users in each spot beam, thereby implementing a physical-layer multigroup multicast scheme [66].
Low latency is another advantage, as users can decode their messages in parallel rather than waiting for sequential time-division transmissions. Additionally, this method enables interference-free connections by communicating only one stream, thereby reducing inter-user interference and enhancing overall link quality. However, physical-layer multicasting also has drawbacks. The most prominent is spectrum inefficiency, driven by the need for the multicast stream to be decodable by all users regardless of channel conditions; the achievable rate is therefore capped by the weakest cell-edge user, resulting in wasted spectral resources.
RSMA is a versatile and robust non-orthogonal transmission scheme designed for multi-antenna wireless networks, effectively handling diverse network loads, user distributions, and imperfect channel knowledge. RSMA works by splitting user messages into common and private parts, enabling flexible interference management by partially decoding interference and treating the rest as noise. Variants such as one-layer Rate splitting, Hierarchical RS (HRS), and Generalized RS (GRS) offer increasing flexibility and performance, with GRS supporting layered message splitting and user grouping. RSMA unifies and outperforms traditional schemes such as OMA, SDMA, NOMA, and multicasting by dynamically adapting to network conditions and offering superior spectral efficiency and EE [67]. It also enhances reliability, fairness, security, coverage, and latency, especially under imperfect CSI. However, its challenges include increased receiver complexity due to SIC, higher encoding and signaling overhead, and the need for complex joint optimization. Despite this, RSMA remains a leading candidate for 6G access due to its performance and adaptability.
Fig. 7 shows three advanced rate splitting schemes: 1-layer RS, HRS, and GRS, all of which are optimized for MIMO systems. Essentially, these strategies differ in how they split and encode user messages at the transmitter into common and private streams, enabling interference control. In 1-layer RS (similar in spirit to NOMA), a single common stream is transmitted to all users alongside dedicated private streams. Every user first uses SIC to decode and remove the common stream, then decodes its private information. HRS adds more flexibility, typically by introducing intermediate common messages for user groups and multi-stage SIC, allowing a finer-grained treatment of interference than fully decoding or fully suppressing it. GRS is the most comprehensive approach, systematically dividing each user’s message into common components (decodable by a predetermined group of users) and a private component. GRS maximizes the degrees of freedom by allowing complex multi-layer SIC protocols at the receivers, with the transmitter defining the exact user grouping and decoding sequence. This considerably reduces inter-user interference, thereby optimizing overall spectral efficiency and sum-rate performance.

Figure 7: Transmission frameworks for 1-layer RS, HRS, and GRS strategies in MIMO
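The rate computation behind 1-layer RS can be sketched with a toy single-antenna model (a deliberate simplification, since RSMA targets multi-antenna systems): the common stream's rate is capped by the weakest user, and each private stream is decoded after SIC removes the common part. The channel gains and power levels below are illustrative assumptions, not values from any referenced system.

```python
import math

def one_layer_rs_rates(gains, p_common, p_private, noise=1.0):
    """Toy achievable-rate sketch for 1-layer RS in a SISO model.

    gains: per-user channel power gains |h_k|^2 (illustrative)
    p_common: power on the common stream
    p_private: list of powers on each private stream
    """
    total_private = sum(p_private)
    # The common stream must be decodable by every user while all
    # private streams are still treated as noise, so its rate is
    # limited by the weakest user's SINR.
    r_common = min(
        math.log2(1 + g * p_common / (g * total_private + noise))
        for g in gains
    )
    # After SIC removes the common stream, each user decodes its own
    # private stream, treating the other private streams as noise.
    r_private = [
        math.log2(1 + g * p / (g * (total_private - p) + noise))
        for g, p in zip(gains, p_private)
    ]
    return r_common, r_private

rc, rp = one_layer_rs_rates(gains=[1.0, 0.25], p_common=6.0, p_private=[1.0, 1.0])
sum_rate = rc + sum(rp)
```

In a full RSMA design, the common/private power split itself would be jointly optimized; here it is fixed to keep the sketch readable.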
4.4 Comparison of Multiple Access Techniques for 6G
As 6G continues to push the boundaries of connectivity, efficiency, and intelligence, MA schemes are becoming increasingly central. To create a clear picture of how traditional and emerging MA strategies can help meet the performance objectives of 6G, a comparison of these approaches is provided. Table 3 summarizes the main features, advantages, and shortcomings of methods such as OMA, NOMA, PL Multicasting, RSMA, and other complementary frameworks, including IBFD, ultra-massive MIMO, and HR. The methods are analyzed in terms of SE, EE, implementation complexity, latency, robustness, and suitability for dense, high-mobility, or heterogeneous networks. This comparative analysis not only demonstrates the merits of next-generation solutions, such as RSMA and OAM, but also highlights the weaknesses of legacy approaches when measured against the essential requirements of the overall 6G access architecture.

5 AI Techniques in 6G Wireless Communication Systems
Future wireless networks are evolving into intelligent platforms that integrate communication, sensing, computing, intelligence, and storage to deliver personalized, adaptive services. AI is central to this transformation, offering powerful tools for managing massive, diverse data—expected to reach 491 exabytes daily by 2025 [68]—and enabling real-time decision-making, predictive maintenance, and dynamic resource optimization. Unlike conventional systems with fixed rules, AI learns from data to adapt to changing environments, improve reliability, and jointly optimize multiple network modules. It enhances performance in both the RAN (e.g., AI-driven scheduling and energy-saving handoffs) and the core network (e.g., intelligent QoS, traffic management, and edge computing). Standardization efforts by the International Telecommunication Union, 3GPP, and International Mobile Telecommunications (IMT)-2030 are advancing AI integration, aiming to unlock its full potential in wireless communication infrastructure and services. Key AI technologies include ML (SL, USL, and RL), DL, optimization, game theory, and meta-heuristics, with ML and DL being the most widely applied in wireless networks.
Fig. 8 illustrates the synergistic integration of three dimensions of AI in 6G networks, highlighting a comprehensive and mutually dependent feedback loop of intelligence and infrastructure essential for managing system complexity. The first, AI that Optimizes Network Functions (AI4NET), involves implementing AI in the control plane using DRL agents for autonomous control functions such as real-time resource orchestration, physical-layer parameter optimization (Massive MIMO beamforming, RIS phase shifts), and dynamic network slicing to optimize spectral efficiency. The second dimension, Network Infrastructure that Enables AI Operations (NET4AI), provides the necessary computational substrate based on MEC to support low-latency inference, while 6G's massive connectivity enables FL to train models privately and in a decentralized manner. Lastly, the third dimension, AI as a Cloud-Based Service (AIaaS), leverages the low-latency 6G backbone to provide complex AI services on demand to industry verticals (e.g., smart robotics, tactile health). This is made possible by abstracting AI algorithms through network function virtualization and SDN, which enables the commercialization of AI. Together, these dimensions create a fundamental functional interdependence: the infrastructure supplies the data and compute that AI requires, while AI supplies the sophisticated intelligence needed to control and commercialize the heterogeneous 6G environment autonomously.

Figure 8: 6G–AI integration
ML is gaining widespread attention for its ability to learn patterns and system behavior through mathematical models, enabling tasks such as classification, regression, and decision-making in dynamic environments. The availability of advanced ML algorithms, vast datasets, and powerful computational resources enhances these capabilities. Once trained, ML models can operate efficiently with minimal arithmetic operations and be deployed within flexible, high-performance network infrastructures to support real-time data processing. Core ML types—SL, USL, and RL—are all applicable to 6G networks, enabling the efficient handling of massive metadata with reduced resource consumption. The ability of ML to predict and adapt to various constraints makes it essential for future communication systems, primarily as 6G aims to create intelligent, resource-efficient ecosystems. Beyond communications, ML impacts numerous areas of daily life and plays a key role in building a socially beneficial, AI-driven future. Moreover, business intelligence tools powered by ML help organizations extract actionable insights by delivering timely, relevant data. At the same time, 6G further enhances this process through autonomous, intelligent operations that optimize decision-making and performance [69].
In SL, models are trained using labeled data, where inputs are paired with known outputs. The learning process involves estimating key parameters—such as coefficients—based on previously collected data and their corresponding expected results. This approach is most effective when the joint distribution of input and output variables is well understood and can be derived from domain-specific knowledge [70]. For instance, in tasks like precipitation prediction, SL relies on historical input-output data to learn predictive patterns. In wireless communication, particularly at the PL, SL can optimize power allocation and manage interference by adaptively adjusting transmission parameters. Beyond the PL, SL also finds applications in the network, transport, and application layers. As 6G evolves, it is expected to significantly enhance and influence the application of SL across these multiple layers. SL algorithms, such as Support Vector Machines (SVMs) and K-Nearest Neighbors (KNNs), leverage historical and real-time network data for tasks such as traffic classification and demand prediction. SVMs are effective at identifying congestion states, while KNNs predict upcoming high-demand periods based on past patterns. Ensemble methods like Random Forests (RF) and Gradient Boosting Machines (GBM) further enhance performance on high-dimensional datasets [71].
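As a concrete illustration of the SL workflow above, the sketch below implements a minimal KNN classifier from scratch for congestion-state prediction. The feature choices (mean load, packet-loss rate) and the training samples are hypothetical examples, not data from any cited study.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Minimal K-Nearest Neighbors majority vote over (features, label) pairs."""
    # Sort labeled samples by Euclidean distance to the query point.
    by_dist = sorted(train, key=lambda fl: math.dist(fl[0], query))
    # The k closest samples vote; the most common label wins.
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Hypothetical features: (mean cell load, packet-loss rate); labels are congestion states.
history = [
    ((0.20, 0.01), "normal"), ((0.30, 0.02), "normal"), ((0.25, 0.01), "normal"),
    ((0.80, 0.10), "congested"), ((0.90, 0.15), "congested"), ((0.85, 0.12), "congested"),
]
state = knn_classify(history, (0.82, 0.11))
```

A production system would of course use a trained model over far richer features; the point is only the labeled-data, nearest-neighbor pattern the text describes.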
In USL, models work without labeled data and instead identify patterns or groupings within the input data independently. In 6G networks, USL is trained on input samples without predefined output labels, enabling tasks such as clustering, feature extraction, feature classification, distribution modeling, and generating samples from specific distributions. This approach [72] is particularly beneficial in complex scenarios, such as vehicular communications, where the limited coherence time at the PL demands faster, more adaptive decision-making. With the broad deployment of 6G, these learning methods will play a key role in higher-layer tasks such as node clustering, pairing, and efficient RA. USL techniques such as K-Means, Hierarchical Clustering, and density-based spatial clustering of applications with noise are effective for detecting anomalies and identifying underutilized resources in real-time traffic monitoring. These methods uncover hidden patterns without labeled data.
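The clustering step described above can be illustrated with a bare-bones K-means (Lloyd's algorithm). Fixed initial centroids keep this toy run deterministic, and the two-dimensional "cell load" points are invented for illustration.

```python
def kmeans(points, centroids, iters=10):
    """Plain Lloyd's K-means; explicit initial centroids make the run deterministic."""
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(range(len(centroids)),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else cen
            for cl, cen in zip(clusters, centroids)
        ]
    return centroids, clusters

# Hypothetical per-cell (load, utilization) samples: two natural groups.
points = [(0.10, 0.20), (0.15, 0.25), (0.90, 0.80), (0.95, 0.85)]
centroids, clusters = kmeans(points, centroids=[(0.0, 0.0), (1.0, 1.0)])
```

With no labels, the algorithm still separates lightly loaded from heavily loaded cells, which is the pattern-discovery behavior the text attributes to USL.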
Semi-supervised learning, by contrast, operates with a small portion of labeled data combined with a much larger set of unlabeled data, making it a practical option when annotated 6G data is scarce [73]. Unlike purely USL, this method leverages the limited labeled data to improve model performance while still exploiting the vast unlabeled dataset. In high-frequency communication environments such as 6G, semi-supervised learning can enhance channel equalization and system monitoring by optimizing performance metrics while reducing computational complexity. Ultimately, 6G technologies are expected to make USL and semi-supervised learning more adaptive and intelligent across network layers.
RL involves an agent interacting with its environment to learn optimal actions based on the feedback or rewards it receives. In the context of 6G networks, RL enables more intelligent and adaptive decision-making, where agents collaborate with network nodes to fine-tune parameters and enhance the quality of service. This learning method bridges SL and USL approaches, using prior knowledge to guide learning while aiming to maximize long-term rewards. RL is well-suited for tasks such as RA and performance optimization in wireless systems, with the DRL architecture being applied to various network challenges.
The effectiveness of ML models depends significantly on the quality and volume of training data. Batch learning algorithms, which are suited to offline processing of large, labeled datasets, often face limitations due to restricted data availability [74]. To overcome these challenges, advanced ML techniques, including Quantum ML, are emerging as key enablers for 6G by leveraging cognitive intelligence and Edge AI to deliver high-accuracy, real-time solutions, positioning the technology as a transformative element in future communication networks [75]. RL supports DRA and spectrum management in 6G by enabling adaptive decision-making. Q-Learning uses reward-based updates to optimize actions, such as bandwidth or power control, while Deep Q-Networks (DQN) handle complex environments using Neural Networks (NNs). MARL allows distributed optimization through cooperative or competitive interactions among network agents, such as BSs.
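The reward-based update that Q-Learning applies to actions such as channel or power selection can be sketched in a few lines. The one-state, two-channel environment below is a toy assumption, not a realistic spectrum model.

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    old = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

# Toy spectrum-access task: a single state and two channels; channel "A"
# is idle (reward +1), channel "B" is busy (reward -1).
Q, actions = {}, ["A", "B"]
for _ in range(50):
    for a in actions:                      # try both actions each episode
        r = 1.0 if a == "A" else -1.0
        q_update(Q, "s0", a, r, "s0", actions)
best = max(actions, key=lambda a: Q[("s0", a)])
```

After training, the greedy policy prefers the idle channel; a DQN replaces the dictionary with a neural network when the state space is too large to tabulate.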
DL, a subset of ML within AI, plays a crucial role in processing both SL and USL data. It enables systems to automatically learn complex patterns and relationships between inputs and outputs across multiple abstraction levels, reducing the need for manually designed features [76]. With the advancement of 6G communication technologies, DL is expected to impact all areas of intelligent networking by enabling real-time data collection and processing. It has already demonstrated effectiveness in applications such as network AD, fault diagnosis, intrusion prevention, and network configuration optimization.
An Artificial Neural Network (ANN) is a data-processing model inspired by the structure and function of the human brain, designed to learn patterns and perform tasks from observed data. Recognized as one of the core deep learning algorithms, an ANN uses a network of interconnected nodes—similar to biological neurons—that efficiently process large volumes of data. These networks consist of multiple layers, commonly referred to as multi-layer perceptrons, where each layer contributes to feature extraction and learning. Neurons in each layer apply activation functions to handle nonlinear transformations, and the design of these connections plays a crucial role in the overall performance of the network. By leveraging cognitive intelligence, ANNs can handle complex tasks. Through extensive training on large datasets, they generalize well to new, unseen inputs [77].
A DNN is an advanced form of ANNs designed for tasks like classification and generalization. It mimics the structure of the human brain, where multiple layers of interconnected neurons process and interpret complex information [78]. Just as the brain can distinguish between different images, a DNN can be trained to recognize patterns and classify inputs, such as images, speech, or handwritten characters. These networks use input vectors—often in matrix form—fed into layers of neurons, including input and hidden layers, enabling the model to learn intricate, nonlinear relationships. Due to this multilayered structure, DNNs are well-suited for handling high-level cognitive tasks that simpler linear models cannot manage. In the context of 6G, DNNs are expected to enhance communication systems by enabling faster processing and improved decision-making through more advanced and adaptable network architectures.
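The layered, nonlinear transformations described in the two paragraphs above can be illustrated with a minimal forward pass through a two-layer network. The weights and activation choices are arbitrary illustrations, not a trained model.

```python
import math

def dense(x, W, b, act=math.tanh):
    """One fully connected layer: act(W x + b), computed element by element."""
    return [act(sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# A toy 2-input network: input -> hidden layer (tanh) -> linear output.
x = [0.5, -1.0]
h = dense(x, W=[[1.0, -0.5], [0.25, 0.75]], b=[0.0, 0.1])      # nonlinear features
y = dense(h, W=[[0.6, -0.4]], b=[0.2], act=lambda z: z)        # linear readout
```

Stacking more such layers is what lets a DNN represent the nonlinear input-output relationships that a single linear model cannot.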
Federated Learning (FL) is a distributed ML paradigm in which many parties or organizations jointly train a shared model while keeping local data private, thereby ensuring confidentiality. In this approach, each client trains a model locally on its own data and transmits only parameter updates to a central server, which then combines them to improve the global model. In contrast to conventional centralized learning, FL trains directly on local devices, which is especially advantageous in privacy-sensitive domains such as healthcare and edge computing. As 6G emerges, FL is increasingly important for enabling intelligent, self-learning networks that can make real-time decisions with minimal latency. By reducing the need to transfer large datasets, FL improves training efficiency and scalability while minimizing data exposure risks. This learning technique is expected to support cognitive networking by combining insights from distributed data sources, thereby enhancing performance in scenarios such as real-time decision support, symptom identification, and network optimization [79]. As 6G evolves toward intelligent and autonomous communication systems, integrating FL with other AI approaches, such as predictive inference and DL techniques, will further enhance network performance. This integration enables the network to adapt, learn, and respond effectively without relying on centralized data storage. However, deploying FL requires careful design to maintain data integrity and security, especially when handling sensitive information.
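The server-side aggregation step just described can be sketched as FedAvg-style weighted averaging of client parameter vectors, with only parameters (never raw data) leaving each device. The client weights and dataset sizes below are made-up examples.

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average local parameter vectors, weighted by
    each client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three clients report locally trained parameter vectors; the larger
# client (200 samples) pulls the global model toward its update.
global_model = fed_avg(
    client_weights=[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    client_sizes=[100, 100, 200],
)
```

Real deployments add secure aggregation and update compression on top of this basic step, precisely because of the integrity and security concerns noted above.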
Black-box algorithms, often associated with DL models, are advanced ML techniques in which the model’s internal workings remain hidden from the user or researcher [80]. In these models, only the outputs are observed, while the internal decision-making process remains opaque. This approach becomes especially relevant in 6G networks, where high specificity and complex architectures demand sophisticated prediction tools. Typically, black-box models are deployed on devices like smartphones and computers to make automated decisions or predictions without a clear understanding of how those outcomes are generated. These models are widely used in applications such as financial forecasting, fraud detection, and large-scale data mining, where transparency in decision-making is often secondary to performance. A black-box classifier is trained on a dataset and then used to make predictions on new data, relying entirely on the patterns it has learned during training. As 6G networks integrate advanced AI, the capabilities of black-box models will be significantly enhanced, allowing for more accurate and responsive systems. However, the interpretability challenge remains a key concern.
5.3 Comparison of AI Techniques in Wireless Networks
As 6G networks evolve into more intelligent and adaptive platforms, their needs require integrating diverse AI techniques and ML methods to address the growing complexity, scale, and dynamism of wireless communication environments. Although each learning model—namely, SL, USL, RL, and DL—has specific advantages, it is essential to understand their relative strengths, weaknesses, and potential applications in the context of 6G systems. This subsection provides an in-depth comparison of the most common AI/ML schemes for 6G, with particular emphasis on the types of learning, key algorithms, strengths, and potential limitations. This comparative view helps researchers and system architects determine the most applicable techniques for a given networking scenario. A tabular analysis of the key AI and ML methods applicable to 6G communication networks is presented in Table 4. It identifies the learning paradigm of each technique, provides representative algorithms, and summarizes their typical areas of application, primary advantages, and limitations. This tabular presentation provides a comprehensive understanding of the role of various AI approaches in creating intelligent, efficient, and autonomous 6G infrastructures.

6 Artificial Intelligence for Spectrum Sensing in 6G Multiple Access
Due to the growing need for enhanced spectral and EE at the MAC layer, researchers are increasingly focusing on SS methods that enable more effective channel access. These techniques play a pivotal role in MA performance. Recent developments demonstrate that incorporating AI approaches—such as DL, RL, and SL—has notably boosted both spectral and EE in areas like SS, sharing, and interference control. Furthermore, integrating AI with advanced technologies like THz communication and RIS offers promising potential for DSM and efficient MA in future 6G systems. 6G network innovation rests on leading technological enablers, comprehensively represented in Fig. 9, which highlights the transition to an ultra-reliable, intelligent, and high-capacity communication paradigm. This vision is achieved through a strategic combination of core technologies: THz Communication, which offers access to massive unlicensed bandwidth (0.1 to 10 THz) and relies on highly directional pencil beamforming to overcome substantial path loss and realize Tbps peak data rates for applications such as holographic communication. RISs are software-controllable arrays of reflectors that are essential for dynamically adjusting phase shifts to overcome signal blockage, suppress interference, and achieve large passive beamforming gains that improve coverage and reduce energy consumption. AI/ML is the fundamental intelligence layer that uses DRL to coordinate resources in real time, optimizing functions such as dynamic spectrum sharing and adaptive beam alignment. Moreover, Blockchain provides the decentralized security and trust layer needed to control transactions within complex heterogeneous networks, including securing FL model updates and enabling transparent spectrum sharing. Lastly, Quantum Technologies provide solutions to future security requirements through Quantum Cryptography/Key Distribution and high-precision network state awareness through Quantum Sensing.
Together, all of these enablers provide the technical levers required for 6G to deliver unmatched performance across reliability, intelligence, and throughput indicators: capacity (THz), channel control (RIS), automation (AI), trust (Blockchain), and security (Quantum).

Figure 9: Pillars of 6G innovation
6.1 AI-Driven Spectrum Monitoring Techniques
Accurate SS is fundamental for enabling efficient spectrum sharing and minimizing multi-user interference in wireless communication. It underpins dynamic channel access and high-performance MA protocols, but it poses challenges due to its sensitivity to the sensing period. Longer sensing durations, while improving accuracy, can reduce transmission time and increase energy consumption, thereby lowering SE. To address these issues, AI-driven SS and DSM have garnered attention, particularly in CR networks, to improve spectrum utilization.
Recent studies have applied various AI techniques—such as SL, USL, DL, RL, and Q-learning (QL)—to enhance SE in 6G networks. For instance, SL and USL methods, such as SVMs, K-means clustering, and Gaussian mixture models, are used for cooperative SS [80]. DL and related methods, including KNN and LSTM-based models, are leveraged for spectrum monitoring, modulation detection, and feature extraction in CR networks. Moreover, multi-agent DRL is utilized for RA in device-to-device communications, enabling efficient decision-making without direct signal exchange.
AI techniques also show promise in THz and RIS-based spectrum management, addressing the complexity of RA and intelligent automation in new frequency bands [81]. SVM-based SL is further applied to cooperative SS in NOMA systems, achieving greater SE than traditional OMA by allowing multiple users to share the same time-frequency resources. Despite these advancements, challenges persist in classifier accuracy and interference management as network density increases. Fig. 10 depicts an AI-enabled cooperative SS system that leverages parallel processing and data fusion to enhance spectrum awareness and utilization. The system's input consists of RF signals that are synchronously acquired by heterogeneous sensing devices (Gadgets 1–5). These signals, usually time-series I/Q data or transformed spectrogram feature matrices, are formatted and divided into two parallel streams (Stream A and Stream B). Each stream runs a typical DNN architecture for feature extraction and classification, with every DNN inference engine applying the trained model to its input data to produce sensing results—usually soft outputs (e.g., probability vectors indicating the presence or absence of the primary user) or intermediate feature vectors. The final stage is the Sensor Data Integration module, which performs decision-level (or possibly feature-level) fusion of the parallel results. This procedure goes beyond basic voting by frequently using advanced algorithms, such as Weighted Majority Voting, Dempster-Shafer theory, or a fusion DNN, to combine sensing outcomes that may be contradictory or complementary. This intelligent combination is essential for reaching a credible sensing decision (e.g., channel occupancy status or primary user identification) and enhances the reliability and robustness of spectrum awareness by alleviating the impact of local fading and noise at individual sensors.

Figure 10: AI-empowered spectrum sensing using DNNs
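The decision-level fusion stage described above can be sketched as weighted majority voting over per-device soft decisions. The probabilities and reliability weights below are illustrative assumptions, not measurements.

```python
def weighted_majority(probs, weights, threshold=0.5):
    """Decision fusion: weighted average of per-sensor occupancy probabilities,
    compared against a global decision threshold."""
    score = sum(p * w for p, w in zip(probs, weights)) / sum(weights)
    return score >= threshold, score

# Five sensing devices report soft decisions P(primary user present);
# historically reliable sensors receive larger weights, so a single
# faded or noisy sensor (0.2 with weight 0.3) barely sways the outcome.
occupied, score = weighted_majority(
    probs=[0.9, 0.8, 0.2, 0.85, 0.6],
    weights=[1.0, 1.0, 0.3, 1.0, 0.7],
)
```

Dempster-Shafer combination or a fusion DNN would replace the weighted average here, but the robustness argument (down-weighting unreliable local observations) is the same.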
6.2 Intelligent Spectrum Access
Efficient spectrum management is crucial for 6G networks, which must support ultra-high data rates, low latency, and massive device connectivity across a broad frequency spectrum, including mm-wave and THz bands. Unlike previous generations, 6G demands intelligent and adaptive strategies for spectrum allocation and interference control. AI plays a pivotal role in achieving this by enabling real-time analysis, predictive modeling, and dynamic decision-making. AI-powered techniques significantly enhance SS accuracy while reducing reliance on prior knowledge, making intelligent spectrum access a promising approach to improving SE. This is achieved by minimizing interference and enabling flexible spectrum sharing across time, frequency, and spatial domains [82].
Current research in intelligent spectrum access focuses on three main areas: spectrum sharing between primary and secondary users in CR networks, sharing across licensed and unlicensed bands (such as Wi-Fi and cellular networks), and sharing between active and passive users. Various AI models—such as Convolutional Neural Networks (CNN), K-means, Gaussian Mixture Models, and KNN—have been employed in CR networks to enhance resource optimization and classification performance [83]. Multi-task learning approaches using real-world datasets have demonstrated notable performance gains. A typical AI-assisted framework involves training DL models offline with historical RF data and applying them for real-time predictions. When concept drift is detected, the models are fine-tuned or retrained online. While DL delivers strong classification accuracy, it often entails high computational costs. Alternatively, SVM-based methods employing user grouping can effectively reduce overhead and improve detection accuracy.
RL techniques have also shown promise in optimizing coexistence strategies. Q-learning, for instance, is used to adjust LTE subframe allocation in carrier-sensing adaptive transmission and to select carriers in LTE license-assisted access systems for improved coexistence with Wi-Fi [84]. RL approaches are further applied to strengthen fairness and alleviate congestion in scenarios involving spectrum sharing between active and passive users. Collectively, these AI-driven methods lay the groundwork for intelligent and efficient spectrum access in 6G networks. Fig. 11 presents an AI-enabled spectrum management infrastructure for 6G-enabled MIoT that leverages a sophisticated DRL framework based on the Actor-Critic architecture. This architecture models the spectrum allocation problem as a Markov Decision Process, with the network acting as the agent. The DRL agent interacts directly with the MIoT environment, characterized by a large number of devices and complex interference dynamics. The state vector observed by the agent at each time step encapsulates critical spectrum awareness information, such as the instantaneous CSI, primary user activity, current interference levels, and the QoS requirements of active MIoT devices. Based on this state, the Actor network (policy network) generates a probability distribution over the available action space. The actions typically involve resource orchestration decisions, such as dynamic channel assignment, power control, transmission scheduling, or adaptive beamforming vector selection for a specific set of MIoT devices. The Reward function quantifies the effectiveness of the chosen action.
This reward is engineered to be a composite metric, often maximizing SE or network throughput while simultaneously penalizing interference and QoS violation penalties. This signal guides the learning process. The system utilizes an experience replay buffer to store transitions, which are then sampled asynchronously to update the Actor network (aimed at optimizing the policy) and the Critic network (aimed at estimating the value function). This off-policy learning approach enhances data efficiency and stabilizes training, ultimately enabling the DRL agent to converge on an optimal spectrum management policy that dynamically allocates resources to support ultra-dense, low-power connectivity demands in 6G MIoT.

Figure 11: AI-driven spectrum management in 6G massive IoT using a DRL actor–critic framework
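Two building blocks of the actor-critic pipeline described above, the experience replay buffer and a composite reward that pays for throughput while penalizing interference and QoS violations, can be sketched as follows. The weight values and transition format are assumptions for illustration only.

```python
import random

class ReplayBuffer:
    """Fixed-size experience replay for off-policy actor-critic updates."""
    def __init__(self, capacity):
        self.capacity, self.data = capacity, []

    def push(self, transition):            # (state, action, reward, next_state)
        self.data.append(transition)
        if len(self.data) > self.capacity:
            self.data.pop(0)               # drop the oldest experience

    def sample(self, batch_size, rng=random):
        """Uniformly sample a minibatch to decorrelate consecutive transitions."""
        return rng.sample(self.data, min(batch_size, len(self.data)))

def composite_reward(throughput, interference, qos_violations,
                     w_tp=1.0, w_intf=0.5, w_qos=2.0):
    """Composite metric: weighted throughput minus interference and
    QoS-violation penalties (weights are illustrative)."""
    return w_tp * throughput - w_intf * interference - w_qos * qos_violations

buf = ReplayBuffer(capacity=3)
for t in range(5):
    buf.push(("s%d" % t, "assign_ch1", composite_reward(10.0, 2.0, 0), "s%d" % (t + 1)))
batch = buf.sample(2, rng=random.Random(0))
```

Sampling from the buffer rather than replaying transitions in order is what stabilizes training and improves data efficiency, as the surrounding text notes.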
Spectrum interference, also known as spectrum overlap, significantly affects sensing performance, necessitating the effective Management of Spectrum Interference (SIM) within sensing pipelines to optimize spectrum usage. Interference can arise from in-band or out-of-band emissions, producing transient bursts from impulse-like waveforms or narrowband/wideband interference, which may distort gain or frequency response, obscure received signals, or cause subtle alterations near the noise threshold. SIM strategies use time-, frequency-, or spatial-domain filtering to mitigate such effects. Recent research has explored DL and RL methods for SIM. DL approaches utilize techniques such as CNNs, ANNs, LSTMs, and other models for spectrum monitoring, signal representation, modulation classification, interference identification, and even SIC [85].
DL techniques, such as CNNs and LSTM networks, have been explored for various SS tasks, including spectrum monitoring, modulation classification, and signal detection in non-orthogonal systems, through advanced interference cancellation methods [86]. A typical intelligent SIM method, as illustrated in Fig. 12, leverages DL algorithms to predict the number of devices from overlapped In-phase/Quadrature data and classify them accordingly, thereby improving medium access efficiency; similar approaches have been extended to RIS-assisted communications [87]. RL techniques, on the other hand, adapt to dynamic environments by selecting frequency subbands based on current and past observations [88]. These methods, including those employing double thresholds and weighted energy detection, have demonstrated improved SIM compared to traditional cooperative SS and energy detection approaches. The spectrum interference management system presented in Fig. 12 is a proactive system for spectrum utilization, built on a physical-layer analysis of signal sources, such as Device 1 and Device 2. The system's input depends on the uncoded In-phase and Quadrature baseband sequences, which are important because they preserve the distinctive RF fingerprints of a device's hardware and modulation scheme. A Learning Algorithm (e.g., a CNN or RNN) then processes this raw data to achieve two critical tasks: accurately classifying signals to detect the source and type of modulation, and detecting and predicting interference. This leads to an Inference Engine that not only identifies instantaneous spectrum collisions but also predicts future interference based on the patterns learned.
The system enables instant collision management through high-speed, fine-grained signal-level analysis, so that intelligent mitigation actions, such as dynamic power control or frequency hopping, can be taken to maintain sufficient spectral separation. This substantially improves the likelihood that communication links remain operational under unexpected interference, compared to conventional reactive mechanisms.

Figure 12: AI-enabled spectrum management for interference detection and collision mitigation
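The double-threshold energy detection mentioned above can be sketched directly: energies above the upper threshold are declared busy, those below the lower threshold are declared idle, and the ambiguous middle band is deferred to cooperative fusion. The thresholds and energy values below are illustrative assumptions.

```python
def double_threshold_detect(energies, low, high):
    """Double-threshold energy detection per sensor:
    energy > high -> 'busy'; energy < low -> 'idle';
    otherwise 'uncertain' (left for cooperative fusion to resolve)."""
    decisions = []
    for e in energies:
        if e > high:
            decisions.append("busy")
        elif e < low:
            decisions.append("idle")
        else:
            decisions.append("uncertain")
    return decisions

# Three measured band energies against illustrative thresholds.
decisions = double_threshold_detect([0.2, 1.5, 0.8], low=0.5, high=1.2)
```

Compared with a single threshold, the explicit "uncertain" region is what reduces false decisions near the noise floor, at the cost of occasionally deferring to the fusion stage.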
6.4 Intelligent Spectrum Utilization
Low-frequency bands will remain essential for providing wide-area coverage, primarily because of their favorable propagation characteristics, especially in NLOS environments, compared to higher-frequency bands. In the coming years, significant portions of the new spectrum are expected to be allocated to support 5G and its future iterations. This expansion is anticipated to nearly exhaust the available sub-6 GHz spectrum. Consequently, by the time 6G networks emerge, more innovative and flexible spectrum utilization strategies will be required—even within licensed bands—to enhance localized access and enable seamless coexistence among diverse users.
Telecommunication operators may be compelled to engage in spectrum sharing, both among themselves and with private or enterprise networks. Furthermore, within the infrastructure of a single operator, various generations of mobile technologies will likely coexist and share spectrum resources. The evolution of radio technologies supporting multi-band operations, along with the integration of intelligent methods such as DRL, can drive efficient, autonomous spectrum-sharing solutions. As advanced beamforming and network densification gain traction, spectrum usage is becoming increasingly localized, enabling greater reuse and promoting effective coexistence within CR and spectrum-sharing frameworks [89].
6.5 Harnessing Emerging Spectrum Bands
To meet the growing demand for higher data rates and capacity, mobile networks are shifting toward higher-frequency bands. 5G already utilizes 3–6 GHz and 24–50 GHz, with future extensions up to 114 GHz [90]. These high-bandwidth allocations enable advanced PL designs but face challenges such as limited propagation, high absorption, and high hardware costs. Solutions include massive antenna arrays and narrow-beam technologies to enhance signal reach and power, while urban reflections can support NLOS coverage [91]. 6G is expected to explore sub-THz bands (114–300 GHz) for backhaul, short-range communications, and edge data center connectivity. Advances in mmWave technology—such as RFICs, on-chip antennas, hybrid beamforming, and AI-driven waveform designs—will support this shift [92]. Meanwhile, improved and lower-cost massive MIMO will expand the use of mmWave and sub-6 GHz (cmWave) bands. As 6G evolves, current high bands may be reclassified as mid bands, while sub-GHz frequencies will remain key for wide-area coverage. Due to the limited availability of low-band spectrum, dynamic AI-based spectrum access across time, frequency, and space will become increasingly essential. VLC may serve niche roles but is unlikely to replace radio technologies.
6.6 Intelligent Traffic Control in 6G Wireless Networks
AI is essential for efficient traffic management in 6G networks, which must support diverse and data-intensive applications such as AR, IoT, and autonomous systems. By leveraging ML, DL, and RL techniques, AI enables predictive traffic analysis, dynamic load balancing, and real-time QoS optimization. Models such as LSTM and Recurrent Neural Networks (RNNs) forecast traffic demand by analyzing historical and real-time data, enabling proactive RA in high-demand areas [93]. RL algorithms, including Q-learning and DQN, adaptively balance network loads and reroute traffic to prevent congestion [94]. AI also supports proactive congestion control through models such as SVM and decision trees, which detect early signs of strain and apply corrective actions [95]. DL and RL methods further enable traffic classification and prioritization based on application needs, optimizing key QoS metrics such as latency and packet loss.
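As a concrete, self-contained illustration of the RL-based load balancing described above, the following sketch trains a tabular Q-learning agent to steer new traffic flows away from a congested cell. The states, actions, and reward values are toy assumptions for illustration, not a scheme from the surveyed literature:

```python
import random

# Illustrative tabular Q-learning for balancing traffic across two cells.
# States: load level of cell A (low/med/high); actions: route a new flow to A or B.
# Rewards penalize routing into the congested cell. All values are toy numbers.

STATES = ["low", "med", "high"]      # load level of cell A
ACTIONS = ["to_A", "to_B"]
ALPHA, GAMMA, EPS = 0.2, 0.9, 0.1    # learning rate, discount, exploration rate

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # Routing toward the lighter cell is rewarded; into the congested one, penalized.
    if state == "high":
        return 1.0 if action == "to_B" else -1.0
    if state == "low":
        return 1.0 if action == "to_A" else -0.2
    return 0.5  # medium load: either choice is acceptable

def step(state):
    # Epsilon-greedy action selection
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    r = reward(state, action)
    next_state = random.choice(STATES)  # toy dynamics: load evolves randomly
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])
    return next_state

random.seed(0)
s = "med"
for _ in range(5000):
    s = step(s)

# After training, the greedy policy offloads traffic away from a loaded cell.
policy = {st: max(ACTIONS, key=lambda a: q[(st, a)]) for st in STATES}
print(policy)
```

In a deployed system, the state would instead encode measured cell loads and the reward would reflect observed latency or throughput; the learning loop itself is unchanged.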
In dense environments, multi-agent RL allows distributed coordination among BSs and edge nodes for decentralized load balancing [96]. AI-driven traffic offloading strategies analyze user behavior and network load to offload non-critical data to nearby edge resources, reducing latency and improving efficiency. Additionally, energy-aware traffic management powered by NNs and GA helps minimize power consumption during low-traffic periods [97]. In Open RAN-based 6G systems, integrated frameworks using LSTM, heuristic flow control, and multi-agent DRL enable intelligent, layered traffic control, supported by real-time and non-real-time radio intelligent controllers for closed-loop adaptation [98]. Overall, AI enhances traffic prediction, congestion control, and resource efficiency, making it a foundational tool for sustaining high QoS in dynamic 6G environments.
Table 5 provides a comprehensive overview of the role of AI in enhancing SS and managing 6G MA systems. It defines essential AI technologies, with DL, RL, and SL among the most advanced, applied to enhance spectrum monitoring, dynamic access, interference control, and traffic management. Each application offers specific benefits, including enhanced SE, real-time decision-making, reduced latency, and improved QoS. Open issues remain, however, including computational complexity, interference in dense networks, and the need for adaptive learning. The table relates these AI solutions to enabling 6G technologies, such as CR, RIS, THz bands, and Open RAN, and outlines varied deployment scenarios, including urban IoT clusters, AV networks, smart factories, and dynamic spectrum-sharing environments. Relevance is demonstrated through practical use cases, such as LTE-Wi-Fi coexistence, cooperative SS, AR/VR traffic control, and RIS-assisted communication. Overall, the table suggests that AI could serve as the basis for intelligent, adaptive, and efficient spectrum use in 6G networks.

6.7 Practical Scenarios and Comparative Analysis
As 6G gains momentum, it is ushering in a new era of hyper-connected, intelligent systems spanning multiple fields. The power of 6G is transforming technologies such as UAVs, autonomous transportation, data science applications, and smart robotics. All of these domains are distinguished by tight AI and 6G integration, which enables more intelligent decision-making, real-time responsiveness, and autonomous operation. Nevertheless, although 6G introduces impressive improvements, each industry also faces its own technical and operational challenges. To capture these differences, the comparison that follows highlights the major technological underpinnings, performance enhancements, and persistent challenges of these 6G-powered solutions, and identifies how they can shape the development of future-oriented digital infrastructure.
6.7.1 Unmanned Aerial Vehicles in the 6G Era
UAVs, including drones, balloons, and other aerial platforms, can operate autonomously or remotely for diverse applications such as military operations, surveillance, search and rescue, and disaster management. With the advent of 6G and AI, these UAVs are evolving into intelligent systems capable of detecting events such as fires, traffic incidents, and security breaches in real time. This capability enables enhanced support for public safety operations, including law enforcement and emergency response. Advanced UAVs, equipped with Drone-to-Drone and Drone-to-Infrastructure communication interfaces, will leverage 6G to enable high-speed, low-latency data sharing and collaboration. For instance, swarm drones may be deployed in coordinated military operations or for border surveillance, utilizing ultra-high-definition video streaming. 6G’s non-cellular and AI-driven architecture will ensure seamless mobility, even as UAVs transition between coverage zones.
Cloud-based intelligence will further empower drones with autonomous decision-making capabilities. Classification of drones based on altitude and purpose—such as high-altitude and low-altitude platforms—will benefit from the robust communication capabilities of 6G, enabling wide-area connectivity and persistent coverage. The convergence of 6G with UAV systems also promises to enhance existing technologies, such as closed-circuit television, by improving data interpretation and response accuracy. As drone technology becomes more affordable and compact, its applications will expand into commercial areas, including aerial photography, delivery, and real-time monitoring. Ultimately, UAVs, supported by 6G, are set to become essential components of next-generation wireless networks and intelligent infrastructure [99].
6.7.2 6G-Enabled Autonomous Transportation
The development of autonomous and connected automated vehicles is being significantly influenced by the advancement of 6G communication technologies. The present era is witnessing the increasing deployment of self-driving cars, which are set to enhance fuel efficiency, optimize routes, and improve operational performance. These intelligent vehicles, supported by 6G, will offer real-time services and economic benefits by leveraging AI and computer vision algorithms to ensure seamless connectivity and decision-making [100]. The future of these vehicles depends heavily on AI-driven DSA and on computational efficiency enabled by 6G technology. Enhanced NNs will allow vehicles to interact and share data, resulting in safer, smarter mobility. AVs are designed to operate without human input, relying on a suite of onboard sensors, including radar, sonar, global positioning systems, and inertial measurement units. These systems enable the vehicle to perceive its surroundings and navigate safely through advanced control algorithms. The primary goal of AV development is to enhance safety, as human error is responsible for the majority of road accidents, while only a small percentage are caused by mechanical failures or poor road conditions.
Connected AVs, on the other hand, not only operate independently but also communicate with other vehicles and the surrounding infrastructure. They use technologies such as Vehicle-to-Vehicle and Vehicle-to-Infrastructure communication to detect environmental features, such as intersections and road curves. Modern connected AV systems employ radar, cameras, and light detection and ranging sensors to detect nearby vehicles, traffic signals, and pedestrians. These inputs are processed by a central computer that controls vehicle dynamics, such as steering, braking, and acceleration [101]. The convergence of these technologies enables improved safety, increased travel time efficiency, and an overall enhanced driving experience. A cloud-based system further enhances the capabilities of connected AVs by learning from past events and adapting their behavior accordingly. With the aid of ANNs, ML, and DL, vehicles are becoming increasingly autonomous and intelligent. The integration of 6G in vehicle communication infrastructure is expected to transform the entire automotive ecosystem—from vehicle production to user experience, enabling a future of fully connected, efficient, and intelligent transportation.
6.7.3 The Role of Data Science in 6G Networks
Data science is a fundamental component of AI, enabling the extraction of meaningful insights from data and the prediction of future trends. With the advanced connectivity and low latency of 6G, AI-driven end-to-end optimization across the entire data science lifecycle becomes feasible. Descriptive data analysis involves summarizing and presenting past data to understand and communicate what has occurred, offering accessible insights without making future predictions. Diagnostic analysis digs deeper by exploring the origins of data and uncovering patterns through techniques such as data mining, correlation analysis, and drill-down methods to determine the causes of observed trends. Prospective data analysis aims to bridge historical and real-time data by leveraging AI within 6G networks to anticipate future conditions and support timely, informed decisions. It helps organizations extract critical insights in real-time and build accurate forecasting models. Predictive analysis, in particular, leverages historical and current data through AI and ML to forecast outcomes with high accuracy [102]. These methods support informed decision-making across industries by projecting behaviors and trends on timescales ranging from milliseconds to years. The data-centric architecture of 6G networks will enhance data availability, support predictive modeling, and drive cost-efficient, intelligent business models.
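Predictive analysis as described above can be reduced, in its simplest form, to fitting a trend on historical data and extrapolating. The sketch below fits a least-squares line to hypothetical hourly cell-load samples; all numbers are invented for illustration:

```python
# Minimal predictive-analytics sketch: fit a linear trend to historical
# traffic volumes (toy numbers) and extrapolate the next period.
def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept for 1-D data.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

hours = [0, 1, 2, 3, 4, 5]
load_gbps = [2.0, 2.4, 2.9, 3.5, 3.9, 4.4]   # hypothetical cell-load samples

slope, intercept = fit_line(hours, load_gbps)
forecast_h6 = slope * 6 + intercept            # predict the next hour's load
print(round(forecast_h6, 2))
```

Production forecasting would of course use richer models (LSTMs, as noted earlier) and far more data, but the descriptive-to-predictive progression is the same: summarize the past, then project forward.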
6.7.4 Artificial Robots Empowered by 6G Communication
The integration of 6G and AI is expected to transform robotics by enabling faster, smarter, and more responsive systems. With powerful edge computing, Ultra-Reliable Low-Latency Communication (URLLC), and support for high-density devices, 6G networks will be able to handle the real-time processing demands of intelligent robots across various domains. Emerging technologies such as edge AI and quantum ML will further enhance robot autonomy and efficiency in embedded systems [103]. As research progresses, 6G is set to become the backbone of future robotic automation.
Robots with Emotional Intelligence aim to replicate human-like interactions by learning from complex data using DNNs. Socially interactive robots will rely on lightweight, user-friendly designs and AI-driven interfaces to serve as caregivers, assistants, and companions. For realistic human–robot interaction, these systems must be programmed with intuitive behaviors and socially adaptive responses. Industrial Robots will significantly benefit from interconnected AI-driven networks that allow autonomous operation without human intervention. Using ML, sensor data, and predictive analytics, robots will improve reliability, safety, and maintenance planning in manufacturing environments. These systems can adapt in real-time, optimizing performance and minimizing downtime. As automation advances, robots are expected to play key roles in sectors such as logistics and healthcare. Healthcare is projected to reach a global market value of $1.7 trillion by 2025, while logistics is expected to exceed $1.2 trillion, driven by growth in e-commerce and retail. Robots in Healthcare will leverage 6G-powered systems, such as the Internet of Intelligent Medical Things, to enable remote surgeries, intelligent mobile clinics, and wearable health monitoring devices [104]. AI will assist medical professionals in analyzing massive datasets to improve diagnosis, treatment plans, and patient outcomes. From robotic surgery to diagnostic systems, healthcare robots will enhance service delivery through ML and real-time analytics.
Robots in Smart Cities will support urban automation by monitoring logistics, enhancing public safety, and managing city infrastructure through real-time data and AI. Industry 4.0 is driving the transition from physical to digital industries, where mobile devices, machines, and robots are interconnected via 6G [105]. Robotics in smart cities will address challenges such as disaster response, food distribution, environmental monitoring, and security, while enabling the creation of new job roles within digital ecosystems. Automated guided vehicles will become part of the robotic framework in urban logistics. Despite high initial costs, AI-powered robots are projected to become more affordable and capable, reducing error rates and enhancing task precision. In summary, 6G will serve as a critical enabler of robotic intelligence across industries, pushing economies toward full automation. Robots will increasingly take on tasks traditionally performed by humans—efficiently, accurately, and with greater autonomy—addressing global challenges in healthcare, industry, urban management, and beyond.
6.8 Comparison of Key 6G-Driven Technologies
Table 6 presents a comparative study of the main areas enhanced by 6G technologies: UAVs, autonomous vehicles, data science, and robotics. These areas are evaluated based on their technological foundations, performance gains, and ongoing challenges. With 6G non-cellular communications and real-time data sharing, UAVs play a crucial role in security, surveillance, and disaster response; however, they also pose challenges related to air traffic control and energy efficiency. AVs and connected AVs can utilize URLLC to make driving safer and more efficient; however, issues related to infrastructure requirements and data privacy persist. In data science, 6G enables fast and smart analytics, including both descriptive and predictive modeling, elevating decision-making and business intelligence while contending with challenges of data availability and complexity. Finally, 6G-reliant robotics, including emotional, industrial, healthcare, and smart city robots, offer features such as real-time collaboration and autonomous operation but face high costs and integration issues. All these areas underscore the extent to which 6G and AI are radically transforming digital infrastructure across sectors.

7 AI-Driven Protocol Design for 6G Multiple Access
The development of 6G networks will involve large-scale, multi-layered, and highly dynamic architectures that must handle massive data, ensure seamless connectivity, and meet diverse QoS requirements across a vast number of devices. To address these challenges, AI techniques—known for their advanced analysis, learning, optimization, and decision-making capabilities—can be integrated into 6G to enable intelligent performance optimization, knowledge discovery, and adaptive control. A proposed AI-enabled architecture for 6G comprises four layers: intelligent sensing, data mining and analytics, intelligent control, and smart applications. This bottom-up structure bridges the physical and social worlds, aligning physical devices and environments with human needs and behaviors.
Fig. 13 presents a four-layer intelligent 6G network architecture developed using AI, providing a unified structure for autonomous operation and service provisioning. It starts at the Sensing Layer, which collects heterogeneous environmental and network-state data, including traffic matrices, CSI, QoS demands, and smart-environment data (e.g., smart-city sensor readings and industrial telemetry). This multimodal data stream is sent to the Analytics Layer, where advanced ML models, such as DNNs and FL schemes, perform complex inference and training to derive actionable intelligence and forecast network behavior. The resulting intelligence is then fed into the Control Layer, which contains the core optimization and resource-coordination algorithms, often running on DRL agents. These DRL agents use the analyzed insights to make real-time decisions in managing critical network operations, including dynamic resource provisioning, task scheduling across the cloud and MEC, and adaptive power allocation. Lastly, the Application Layer leverages the network's optimized state to deliver high-reliability, low-latency services, supporting service provisioning across a wide variety of vertical applications and thereby underpinning the network's overall functionality and scalability.

Figure 13: AI-enabled architecture of intelligent 6G networks
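The layered flow of Fig. 13 can be caricatured in a few lines of code. The functions and thresholds below are purely illustrative stand-ins for the sensing, analytics (ML inference), control (DRL), and application stages, not an implementation from the surveyed literature:

```python
# Toy end-to-end sketch of the four-layer flow: sensing -> analytics ->
# control -> application. Names and thresholds are illustrative only.

def sensing_layer():
    # Collect heterogeneous network-state data (here: a fixed toy sample).
    return {"traffic_load": 0.82, "csi_quality": 0.6, "qos_demand": "low_latency"}

def analytics_layer(obs):
    # Infer actionable intelligence; a trained DNN/FL model would go here.
    congestion_risk = "high" if obs["traffic_load"] > 0.75 else "low"
    return {"congestion_risk": congestion_risk, **obs}

def control_layer(insight):
    # A DRL agent would pick actions; this rule-based stub mimics its output.
    if insight["congestion_risk"] == "high":
        return {"action": "offload_to_mec", "power": "reduce"}
    return {"action": "keep_local", "power": "hold"}

def application_layer(decision):
    # Deliver the service given the optimized network state.
    return f"service running, action={decision['action']}"

result = application_layer(control_layer(analytics_layer(sensing_layer())))
print(result)
```

The value of the layered decomposition is that each stub can be swapped for a real model (e.g., an FL-trained classifier in the analytics layer) without touching the others.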
7.1 Intelligent Sensing Layer
In 6G networks, sensing and detection form the foundational layer of operations. These networks are designed to intelligently gather data from the physical world through a wide array of devices, including sensors, cameras, drones, vehicles, smartphones, and even human participation. With the integration of AI, the sensing process can efficiently handle vast amounts of dynamic, heterogeneous, and scalable data. This includes key tasks such as monitoring environmental parameters, identifying RF usage, detecting interference or intrusions, and performing SS.
To support URLLC, a hallmark of 6G, sensing must deliver high accuracy, robustness, and real-time responsiveness. However, the dynamic nature of 6G networks introduces spectrum uncertainty, making reliable and accurate sensing particularly difficult. AI techniques offer practical solutions to these challenges. For instance, fuzzy SVMs and non-parallel hyperplane SVMs are known for their robustness in uncertain environments. CNNs can enhance sensing accuracy with relatively low computational cost, while combining K-means clustering with SVMs enables real-time sensing using low-dimensional training samples. Additionally, Bayesian learning techniques help integrate heterogeneous data into large-scale sensing scenarios. One critical example is SS, a vital method for improving spectrum utilization and mitigating spectrum scarcity. In expansive 6G environments, where numerous devices simultaneously attempt to sense the spectrum, this becomes a high-dimensional challenge. AI-based methods can analyze spectral characteristics and generate adaptive training models that reflect current usage patterns. Specifically, models such as SVMs and DNNs classify each input vector (representing spectral data) into predefined categories.
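To make the SS-as-classification idea concrete, the following toy sketch computes per-window signal energies and separates occupied from idle windows with a 1-D 2-means clustering step, echoing the K-means-assisted sensing mentioned above. The signal parameters and noise model are invented for illustration:

```python
import random

# Illustrative AI-aided spectrum sensing: compute per-window signal energy,
# then cluster the energies with 1-D 2-means to separate "occupied" from
# "idle" windows without a hand-tuned threshold. All values are toy numbers.

random.seed(1)

def window_energy(occupied):
    # Toy received samples: Gaussian noise, plus a primary-user offset when occupied.
    samples = [random.gauss(0, 1) + (2.0 if occupied else 0.0) for _ in range(64)]
    return sum(x * x for x in samples) / len(samples)

truth = [i % 3 == 0 for i in range(90)]            # ground-truth occupancy
energies = [window_energy(o) for o in truth]

# 1-D 2-means clustering on the energies
c_lo, c_hi = min(energies), max(energies)
for _ in range(20):
    lo = [e for e in energies if abs(e - c_lo) <= abs(e - c_hi)]
    hi = [e for e in energies if abs(e - c_lo) > abs(e - c_hi)]
    c_lo, c_hi = sum(lo) / len(lo), sum(hi) / len(hi)

predicted = [abs(e - c_lo) > abs(e - c_hi) for e in energies]
accuracy = sum(p == t for p, t in zip(predicted, truth)) / len(truth)
print(round(accuracy, 2))
```

In the literature the clustering output typically seeds or labels an SVM/DNN classifier operating on richer spectral features; the energy statistic here is the simplest such feature.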
7.2 Intelligent Data Processing Layer
In 6G networks, vast volumes of heterogeneous and high-dimensional data are generated by numerous connected devices. Efficiently processing and analyzing this data is critical for extracting meaningful insights and enabling intelligent network behavior. This layer focuses on transforming raw data into valuable knowledge through AI-based data mining and analytics techniques. To address the challenges of data storage, transmission, and complexity, dimensionality reduction methods such as Principal Component Analysis (PCA) and isometric mapping are employed. These techniques compress high-dimensional data—such as traffic flows, channel information, or multimedia content—into a more manageable format, improving computational efficiency and filtering out anomalies or irrelevant information. Beyond preprocessing, data analytics plays a key role in uncovering patterns and valuable features across diverse data sources, including the physical environment, cyber systems, and social networks. This enables a deeper understanding of 6G network behaviors and supports applications such as RA, protocol optimization, signal processing, and cloud-based services. For example, analytics-driven models like ISAGUN can learn UAV mobility patterns, predict ground network behavior, and model satellite-to-ground communication channels effectively [106].
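A minimal, dependency-free sketch of the PCA step mentioned above: it finds the first principal component of 2-D data by power iteration on the covariance matrix and projects the data onto it (a 2-D to 1-D reduction). The data values are a standard textbook example, not network measurements:

```python
import math

# Minimal PCA sketch: dominant eigenvector of the 2x2 covariance matrix via
# power iteration, then projection of each point onto it.

data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0),
        (2.3, 2.7), (2.0, 1.6), (1.0, 1.1), (1.5, 1.6), (1.1, 0.9)]

n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
centered = [(x - mx, y - my) for x, y in data]

# Sample covariance matrix entries
cxx = sum(x * x for x, _ in centered) / (n - 1)
cyy = sum(y * y for _, y in centered) / (n - 1)
cxy = sum(x * y for x, y in centered) / (n - 1)

# Power iteration for the dominant eigenvector (the first principal component)
vx, vy = 1.0, 0.0
for _ in range(50):
    nx, ny = cxx * vx + cxy * vy, cxy * vx + cyy * vy
    norm = math.hypot(nx, ny)
    vx, vy = nx / norm, ny / norm

projected = [x * vx + y * vy for x, y in centered]  # 1-D representation
print((round(vx, 3), round(vy, 3)))
```

Real traffic or CSI data would be high-dimensional, so one would keep the top-k components (or use isometric mapping for nonlinear structure, as the text notes), but the mechanics are the same.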
7.3 Intelligent Control Layer
The intelligent control layer in 6G networks plays a crucial role by integrating learning, optimization, and decision-making capabilities. This layer enables a wide range of agents—such as devices and BSs—to autonomously learn from data, optimize their operations, and select the most appropriate actions, including power control, spectrum access, routing, and network association. These functions are driven by knowledge extracted from the lower layers and are vital for supporting diverse and dynamic service demands across large-scale, intelligent networks. Learning within this layer involves using or refining existing knowledge to enhance the performance of devices and service nodes. It allows the network to adapt intelligently to application requirements through techniques such as optimal network slicing, PL design, edge computing, and RM. AI models, particularly those based on ML, enable the network to develop features such as self-configuration, self-optimization, self-organization, and self-healing, thereby enhancing its adaptability and efficiency. For example, in post-massive MIMO systems with numerous antennas deployed at mmWave or THz frequencies, learning models such as RNNs can be used to model and mitigate nonlinearities in RF components, thereby enabling energy-efficient beamforming.
Optimization in 6G networks involves adjusting network parameters to achieve global goals, including QoS, user experience, coverage, and connectivity. Traditional optimization methods, which rely heavily on complex mathematical formulations, may not be suitable for the highly dynamic and heterogeneous nature of 6G networks. Instead, AI-based optimization techniques offer a more scalable and flexible approach. These methods enable automatic training of models that can adapt network configurations in real time, optimizing performance without extensive manual intervention. DL, in particular, can facilitate the transformation of traditional architectures into intelligent, software-defined, and virtualized frameworks that support rapid adaptation and efficient resource utilization. Decision-making is another critical component, enabling agents within the network to reason, plan, and choose the best course of action in complex environments. This involves striking a balance between exploration—gathering new information—and exploitation, utilizing existing knowledge for informed decision-making. AI techniques, such as RL, are well-suited for these tasks, helping agents make optimal decisions in scenarios like selecting precoding strategies in mmWave/THz systems, managing routing in dynamic topologies, or implementing flexible spectrum allocation schemes. Through these intelligent control mechanisms, 6G networks can become more agile, autonomous, and capable of meeting the high-performance demands of future communication systems [107].
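The exploration-exploitation balance discussed above is captured in its simplest form by an epsilon-greedy bandit. In this hypothetical sketch, an agent learns which of three channels has the best (hidden) success rate; the probabilities are toy values:

```python
import random

# Epsilon-greedy bandit sketch of the exploration/exploitation trade-off:
# the agent occasionally explores a random channel, otherwise exploits the
# channel with the best running-mean reward.

random.seed(42)
success_prob = [0.2, 0.5, 0.8]        # hidden per-channel quality (toy values)
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]              # running mean reward per channel
EPS = 0.1

for t in range(3000):
    if random.random() < EPS:                         # explore
        arm = random.randrange(3)
    else:                                             # exploit
        arm = max(range(3), key=lambda a: values[a])
    r = 1.0 if random.random() < success_prob[arm] else 0.0
    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]    # incremental mean update

best = max(range(3), key=lambda a: values[a])
print(best, [round(v, 2) for v in values])
```

Full RL (as in the precoding and routing examples above) generalizes this by making the reward depend on a state that the agent's actions also influence.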
7.4 AI-Driven Application Layer
This layer is primarily responsible for delivering customized application-level services tailored to users’ diverse needs, while also assessing service performance and feeding evaluation results back into the intelligent control loop. Driven by AI, this layer enables the deployment and management of advanced, smart applications across domains such as autonomous services, smart cities, healthcare, transportation, energy systems, and industry. It also oversees the global coordination of smart devices, terminals, and infrastructure within 6G networks, enabling self-organizing network capabilities. An additional core function of this layer is to evaluate the performance of the services provided. This involves assessing multiple quality metrics, including QoS, quality of experience, the relevance and accuracy of collected data, and the reliability of the insights derived. Furthermore, resource-related metrics—such as spectrum, computational, energy, and storage efficiency—are considered to ensure cost-effective and optimized operations. These evaluations support enhanced RA, enable automatic network slicing, and facilitate the delivery of intelligent, adaptive services across a wide range of 6G applications.
7.5 Intelligent Edge Computing
MEC is a key enabler in 6G networks, offering localized computing, data management, and analytics near end-user devices by integrating with RAN or Software-Defined Networking (SDN). However, due to the dynamic, high-dimensional, and uncertain nature of edge environments, traditional optimization methods, such as Lagrangian duality, may fall short. In contrast, AI offers robust solutions by enabling intelligent data processing, decision-making, and prediction in MEC systems. At the edge level, where computational resources are limited, lightweight AI models are deployed to support real-time applications such as smart agriculture and transportation. RL, particularly in a model-free format, is effective for managing edge resources by learning optimal decisions through interaction with dynamic environments. RL agents can adaptively select actions related to energy use, task scheduling, or RA to maximize rewards like reduced latency, improved reliability, or higher data throughput.
At the central cloud layer, with significantly more computational power, complex AI algorithms can handle large-scale data from multiple edge nodes. DL and AI-based classification techniques are used for tasks such as traffic analysis, behavior prediction, service recognition, and security monitoring. AI-driven clustering methods also optimize MEC server associations, enhancing overall system efficiency. Additionally, DRL is well-suited for high-dimensional, complex decision-making problems at scale, leveraging techniques such as experience replay to improve learning accuracy and performance. Together, these AI-enabled solutions ensure that MEC systems in 6G networks deliver intelligent, adaptive, and high-quality services to edge devices. Fig. 14 shows the technical diagram of an AI-based MEC platform for real-time, efficient resource coordination in 6G. This framework has a hierarchical control structure that uses DRL. The information provided by multiple edge applications (for example, smart transportation telemetry, agricultural sensor data, and smart device streams) is first aggregated. The system uses central cloud servers to execute the large-scale, global DRL process, often employing an Actor-Critic or similar algorithm. This process consumes large datasets to develop an optimal global policy for resource management. Crucially, this learned policy can then be transferred to RL controllers running on edge computing servers. These localized controllers enable real-time, low-latency decision-making through interaction with the local environment. The edge controllers coordinate local resources through dynamic RM (e.g., compute, bandwidth, power) and task scheduling to attain optimal performance measures, e.g., reducing latency and maximizing throughput for delay-sensitive applications at the network edge, using local inference and local state knowledge.

Figure 14: AI-enabled framework for intelligent and efficient mobile edge computing (MEC)
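Experience replay, mentioned above as a technique DRL uses to improve learning accuracy, amounts to a bounded buffer of past transitions sampled uniformly at random during training, which breaks the temporal correlation of consecutive experiences. A minimal sketch with toy transition values:

```python
import random
from collections import deque

# Sketch of an experience-replay buffer: transitions are stored and later
# sampled uniformly, decorrelating consecutive experiences during training.

class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted automatically

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random minibatch for a gradient step of the DRL agent.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

random.seed(7)
buf = ReplayBuffer(capacity=100)
for t in range(250):                       # more pushes than capacity
    buf.push(t, t % 4, float(t % 2), t + 1)

batch = buf.sample(32)
print(len(buf), len(batch))
```

In a full DRL loop, each sampled minibatch would feed a network update; the bounded `deque` ensures stale experience from old policies is gradually discarded.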
7.6 AI-Based Mobility and Handover Control
Mobility and handover management are among the most complex challenges in 6G networks, given their highly dynamic, multi-layered, and large-scale nature. Frequent handovers, especially in high-mobility scenarios, demand intelligent solutions to ensure seamless connectivity and low latency. AI techniques are increasingly being integrated into these processes to enable real-time mobility prediction and optimal handover strategies. For instance, with the integration of UAVs into 6G, their high-speed operation results in frequent, unpredictable handovers. AI methods, such as DRL, combine the strengths of DL and RL to make real-time decisions based on evolving mobility patterns. In such systems, UAVs function as learning agents, observing environmental states—such as location, speed, and link quality—and selecting the most efficient mobility or handover actions to maximize performance metrics like connectivity, capacity, and latency.
Beyond UAVs, vehicular communications in 6G also require robust mobility management to support fast-moving users while maintaining service reliability and low delay. Predictive techniques using DL models such as RNNs, ANNs, and LSTMs can learn and forecast movement patterns. These models help reduce handover frequency and failures by anticipating future mobility states. Similarly, fuzzy Q-learning is employed to dynamically optimize handover parameters in response to varying conditions, thereby further enhancing connectivity and service continuity. Together, these AI-enabled solutions enable 6G networks to adapt and learn, handling complex mobility scenarios across UAV and vehicular applications and ensuring stable, high-performance communication services.
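A drastically simplified version of the predictive handover control described above: exponentially smooth the serving- and target-cell signal measurements and trigger a handover only when the smoothed target exceeds the serving cell by a hysteresis margin, which suppresses ping-pong handovers. The measurement traces and parameters are invented for illustration:

```python
# Toy predictive handover sketch: smoothed RSRP (dBm) of serving and target
# cells, with a hysteresis margin to suppress ping-pong handovers.

ALPHA = 0.4        # smoothing factor
HYST_DB = 3.0      # hysteresis margin in dB

def smooth(prev, measurement):
    # Exponential smoothing: a crude stand-in for an LSTM mobility predictor.
    return ALPHA * measurement + (1 - ALPHA) * prev

# A user moves away from cell A toward cell B (toy measurement traces).
serving = [-80, -82, -85, -88, -92, -95, -98]
target  = [-100, -96, -92, -88, -85, -82, -79]

s_est, t_est = serving[0], target[0]
handover_step = None
for i, (s, t) in enumerate(zip(serving, target)):
    s_est, t_est = smooth(s_est, s), smooth(t_est, t)
    if handover_step is None and t_est > s_est + HYST_DB:
        handover_step = i

print(handover_step)
```

Note that the raw traces cross at step 3, but the smoothed, hysteresis-gated rule fires later, trading a little delay for stability; fuzzy Q-learning, as the text notes, can tune `ALPHA`-like and `HYST_DB`-like parameters online instead of fixing them.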
7.7 AI-Enhanced Multiple Access Protocol Architectures
The integration of AI-driven SS significantly enhances the performance of MA protocols by improving key aspects, including reducing collisions, increasing throughput, and enhancing adaptability. These improvements are achieved without compromising system reliability, particularly when AI is incorporated into structure-based protocol designs. Furthermore, the combination of AI with technologies such as Hybrid Beamforming (HBF), RIS, and massive MIMO is expected to play a crucial role in enabling intelligent and pervasive 6G services. This section explores and summarizes the structural considerations for the MAC layer, focusing specifically on three main design areas: frame structures, protocol strategies, and practical implementation approaches.
7.7.1 Frame Architecture and Layout
In AI-enhanced MA systems, the design of the frame structure is critical for achieving efficient and reliable communication. A well-designed frame structure supports URLLC and flexible bandwidth usage, which are essential for 6G performance demands. Recent proposals have introduced dynamic frame structures that enable flexible scheduling, such as combining FDD and TDD, or allowing variable frame sizes to adapt to individual user needs. For instance, advanced models include hierarchical frame designs with general frames, large frames, and superframes, which may follow fixed schedules or random intervals.
In these systems, each frame serves as a cyclical operational unit in which the chosen MA scheme—such as TDMA, FDMA, CDMA, NOMA, or RA—is configured and executed. AI techniques play a key role in dynamically selecting and adapting the MA strategy based on real-time network conditions. Furthermore, access units and PL multiplexing schemes are determined in coordination with the selected MA protocol, affecting the access pattern, capacity, and data exchange processes. These flexible and AI-driven frame structures lay the foundation for the protocol and implementation strategies that enhance the performance of next-generation MA systems [108]. Fig. 15 illustrates the general structure of an AI-powered MA scheme, a foundational design for improving resource utilization and mitigating complexity in next-generation networks. This architecture integrates a centralized or distributed AI engine—often implemented using DRL agents or sophisticated DNNs—that serves as the core controller. This engine continuously learns from network data, including parameters such as CSI, instantaneous traffic load, and individual QoS requirements. Based on this learned policy, the AI dynamically performs joint optimization across multiple dimensions: user access scheduling (determining which users transmit when), RA (optimizing sub-carrier, power, and bandwidth assignment), and proactive interference management (e.g., adaptive precoding or null steering). By replacing fixed or heuristic MA protocols with an adaptive, data-driven policy, the system ensures that multiple users efficiently share communication resources, achieving superior spectral efficiency and reliability compared to traditional orthogonal or static access methods.

Figure 15: General structure of an AI-powered multi-access scheme
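How an AI engine might pick the per-frame MA scheme can be caricatured by a simple decision rule. In practice a learned policy would replace the hand-set thresholds below, which are hypothetical:

```python
# Hypothetical per-frame MA selection rule: with few users, orthogonal access
# (TDMA) avoids SIC complexity; under heavy load with large channel-gain
# disparity, power-domain NOMA pairs near/far users for higher spectral
# efficiency; otherwise fall back to OFDMA. Thresholds are illustrative.

def select_ma_scheme(num_users, gain_ratio_db):
    if num_users <= 4:
        return "TDMA"
    if gain_ratio_db >= 10.0:     # strong near/far disparity favors NOMA
        return "NOMA"
    return "OFDMA"

# Three toy frames: (active users, max channel-gain ratio in dB)
frames = [(3, 2.0), (12, 15.0), (9, 5.0)]
schemes = [select_ma_scheme(u, g) for u, g in frames]
print(schemes)
```

A DRL-based engine would make the same kind of per-frame decision, but from learned Q-values over richer state (CSI, load, QoS) rather than fixed thresholds.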
7.7.2 Protocol Scheme Architecture
AI has significantly influenced the design of MAC protocols in wireless networks, particularly as 6G demands evolve. These AI-empowered MAC designs are generally categorized into centralized, distributed, and hybrid approaches. In centralized schemes, an intelligent controller coordinates channel access using AI methods such as DL and RL to minimize collisions and allocate resources efficiently. Traditional multiple access techniques, such as TDMA, FDMA, CDMA, and NOMA, benefit from AI by enhancing scheduling, reducing latency, and improving spectrum utilization. However, these centralized methods can be less flexible and inefficient under low traffic conditions due to synchronization overhead and limited adaptability [109]. Distributed schemes, on the other hand, enable devices to access channels autonomously without a central controller. These protocols often use carrier-sensing mechanisms, as seen in Carrier Sense Multiple Access (CSMA)-based systems. AI techniques, particularly RL, are applied to optimize internal MAC parameters such as contention windows, backoff timers, frame lengths, and transmission rates. While distributed designs perform well in low-density scenarios, their efficiency diminishes as network traffic increases, leading to higher collisions and increased energy consumption.
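The RL-tuned contention-window idea for distributed MAC can be caricatured by a feedback loop that widens the window on collisions and shrinks it on successes. The collision model and constants below are toy assumptions, not a protocol from the literature:

```python
import random

# Sketch of learning-style contention-window (CW) adaptation in a CSMA-like
# MAC: on collision the CW grows, on success it shrinks, echoing how an RL
# agent would tune backoff parameters from feedback. Collision probability
# is a toy function of the (fixed) number of contenders and current CW.

random.seed(3)
CW_MIN, CW_MAX = 16, 1024
cw = CW_MIN

def collision_prob(cw, contenders=20):
    # Larger windows spread attempts out, lowering collisions (toy model).
    return min(0.95, contenders / cw)

history = []
for _ in range(200):
    collided = random.random() < collision_prob(cw)
    if collided:
        cw = min(CW_MAX, cw * 2)      # back off harder
    else:
        cw = max(CW_MIN, cw // 2)     # reclaim latency
    history.append(cw)

print(history[-1], max(history))
```

An actual RL formulation would learn when and how much to adjust `cw` (and backoff timers, frame lengths, rates) from reward signals rather than using this fixed doubling/halving rule.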
To address the limitations of both approaches, hybrid schemes have emerged. These combine centralized control and distributed access, dynamically switching between the two based on real-time network conditions. AI plays a pivotal role in enabling this switch, ensuring high adaptability without sacrificing performance. Several studies have proposed intelligent hybrid MAC protocols that simultaneously optimize scheduling and contention mechanisms. These protocols are beneficial in environments involving mixed technologies, such as Wi-Fi, IoT, RIS, and NOMA, where they help balance traffic load, improve throughput, and maintain low latency. Overall, AI-driven MAC protocol design is proving essential for building adaptive, efficient, and intelligent communication systems in the 6G era. Fig. 16 illustrates how the architectural design of AI-enabled MA schemes can be divided into three basic deployment paradigms: centralized, distributed, and hybrid access. In centralized access, a central entity controls resources and users; in distributed access, users or edge nodes make access decisions independently without referring to a central entity; and hybrid access combines the two, using centralized control to manage the network and decentralized intelligence to make local decisions, which makes it attractive for its flexibility and efficiency in dynamic networks. These paradigms differ in the location of the AI engine, the implementation of control signaling, and the resulting latency-efficiency trade-off. The centralized MA architecture places a single, powerful AI engine (e.g., a high-dimensional DRL agent) in the core network controller. This engine exploits global CSI to implement complex resource allocation policies, achieving high spectral efficiency at the cost of high control overhead and latency.
In contrast, the distributed MA scheme deploys simpler, local AI agents on individual UEs or edge nodes that make access decisions based on local CSI only, minimizing signaling latency and maximizing robustness, though it can result in suboptimal resource utilization. The hybrid MA scheme combines the advantages of both, creating a hierarchical control structure in which centralized intelligence handles large-scale, network-wide optimization (e.g., network slicing) while decentralized AI controllers handle fast, real-time decisions (e.g., power adjustment). This hybrid model is highly desirable for dynamic networks because it provides the flexibility needed to balance near-optimal spectral efficiency with minimal latency overhead.

Figure 16: Description of three types of MA schemes: centralized, distributed, and hybrid access architectures
Fig. 17 illustrates a closed-loop framework for the RL-based adaptive MA protocol design, centered on a DRL agent to achieve dynamic network optimization. The system models the MA resource allocation problem as a Markov Decision Process. The DRL agent receives its state vector from various MAC blocks and the current network environment, including critical parameters such as instantaneous CSI, traffic load (demand), and QoS priority levels. Utilizing these inputs, the DRL agent, acting as the policy network, determines the optimal action set for protocol adaptation, which involves fine-grained resource orchestration decisions such as dynamic assignment of time, frequency, power, and coding resources. The consequences of these actions are quantitatively assessed by a Throughput Evaluation Module, which computes the resulting performance metrics (e.g., SE or aggregate throughput). This metric is then fed back to the DRL agent as a scalar reward, guiding the agent’s policy update via gradient-based methods. This continuous, closed-loop learning system enables the MA protocol to dynamically respond to volatile network conditions and iteratively optimize its resource allocation strategy, ensuring sustained performance improvement and adaptation.

Figure 17: RL-based adaptive multiple access protocol design for dynamic network optimization
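The closed loop described above can be sketched with tabular Q-learning. In the toy model below, the agent selects a transmit-power action per traffic-load state, a Shannon-style throughput term stands in for the Throughput Evaluation Module, and the states, actions, and reward constants are invented for illustration (a practical DRL agent would use a neural policy over a far richer state).

```python
import math, random

random.seed(0)

# Toy closed loop in the spirit of an RL-based MA protocol: the agent picks a
# transmit-power action, a throughput module computes a log(1 + SNR)-style
# reward minus an energy cost, and tabular Q-learning updates the policy.
POWERS = [0.25, 0.5, 1.0]           # candidate power levels (actions)
STATES = ["low_load", "high_load"]  # coarse traffic-load state

def reward(state, power):
    # Under high load, aggressive transmission collides more often, so the
    # effective success probability drops with power (illustrative model).
    success = 1.0 - 0.6 * power if state == "high_load" else 1.0
    snr = power / 0.1
    return success * math.log2(1 + snr) - 0.2 * power

Q = {(s, a): 0.0 for s in STATES for a in range(len(POWERS))}
alpha, eps = 0.1, 0.2
for _ in range(5000):
    s = random.choice(STATES)
    # Epsilon-greedy action selection over the Q-table.
    a = (random.randrange(len(POWERS)) if random.random() < eps
         else max(range(len(POWERS)), key=lambda a: Q[(s, a)]))
    # One-step (bandit-style) update; no next-state bootstrap for simplicity.
    Q[(s, a)] += alpha * (reward(s, POWERS[a]) - Q[(s, a)])

for s in STATES:
    best = max(range(len(POWERS)), key=lambda a: Q[(s, a)])
    print(s, "-> power", POWERS[best])
```

The learned policy transmits at full power when the network is lightly loaded but backs off under high load, which is exactly the adaptive behavior the closed-loop framework is meant to produce.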
The practical implementation of AI-driven MA protocols can be categorized into two main architectures: centralized and distributed. In a centralized setup, a central controller—such as a BS or AP—manages user access and RA through AI models. The operational frame is divided into three segments: pilot, computation, and data transmission periods. Users send pilot signals to enable the controller to estimate channel conditions, allocate resources, and configure access parameters. To manage the complexity of real-time decision-making, DL models are pre-trained offline and then utilized during operation for efficient inference, enabling intelligent scheduling across time, frequency, and power domains [110]. On the other hand, the distributed framework allows each user device to manage its access decisions independently. Each device applies RL techniques to sense channels, compute required resources, and determine MAC parameters in real time. This decentralized approach combines CSMA and FDMA principles, following the IEEE 802.11 DCF protocol to dynamically adjust to varying network conditions. The distributed scheme is beneficial in scenarios where centralized coordination is either infeasible or inefficient due to the network's scale or dynamics.
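As a reference point for the MAC parameters an RL agent would tune in the distributed case, the sketch below simulates the classic IEEE 802.11 DCF binary exponential backoff rule: the contention window doubles on each collision up to a maximum and resets after a successful transmission. The CW_MIN/CW_MAX values follow common 802.11 defaults; everything else is a simplified illustration.

```python
import random

random.seed(1)

# Toy model of the contention-window (CW) adaptation that distributed MAC
# schemes build on: IEEE 802.11 DCF binary exponential backoff. An RL agent
# would tune these parameters online; here we simulate the classic rule.
CW_MIN, CW_MAX = 15, 1023

class Device:
    def __init__(self):
        self.cw = CW_MIN
        self.backoff = random.randint(0, self.cw)

    def on_collision(self):
        # Double the window (15 -> 31 -> 63 -> ...), capped at CW_MAX.
        self.cw = min(2 * self.cw + 1, CW_MAX)
        self.backoff = random.randint(0, self.cw)

    def on_success(self):
        # Reset to the minimum window after a successful transmission.
        self.cw = CW_MIN
        self.backoff = random.randint(0, self.cw)

d = Device()
d.on_collision(); d.on_collision()
print("CW after two collisions:", d.cw)  # 15 -> 31 -> 63
d.on_success()
print("CW after success:", d.cw)         # back to 15
```

An RL-augmented variant would replace the fixed doubling/reset rule with a learned policy over observed collision rates, which is precisely the kind of internal MAC-parameter optimization the distributed schemes above describe.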
7.8 Networks with Perceptive Intelligence
A fundamental requirement for industrial automation is high-precision localization, which is traditionally achieved through technologies such as real-time kinematic global navigation satellite systems. However, these methods fall short in indoor environments where satellite visibility is limited. Current indoor localization techniques, such as those leveraging ultra-wideband or Bluetooth Low Energy, require separate infrastructure, resulting in increased installation and maintenance costs. As a response, 5G introduces localization enhancements that enable both URLLC and accurate positioning within a single network framework. Looking ahead, 6G is poised to transform communication systems into unified platforms for both data transmission and environmental sensing. By integrating advanced localization with sub-centimeter precision, even in NLoS indoor settings, 6G networks will significantly extend the sensing coverage and accuracy. This evolution will be powered by AI/ML-driven channel charting, massive MIMO, and multi-sensor data fusion, combining signals from RF, vision, and other modalities.
Beyond localization, 6G will enable passive object imaging by leveraging sensing-friendly waveforms, such as chirps, beam-sweeping mechanisms, and coordinated transmission from large antenna arrays. The shift to sub-THz and THz frequencies, with their wide bandwidth, will enable millimeter-level resolution, opening new possibilities in fields such as industrial defect detection, medical diagnostics, and security systems. Crucially, the fusion of multimodal sensing with cognitive intelligence will enable networks to infer human behavior, preferences, and even emotional states. This convergence of communication and perception is expected to give rise to what may be considered a sixth sense of the network—an anticipatory system capable of interacting with the physical world in a highly intuitive and context-aware manner.
Emerging industrial IoT applications require ultra-low latency (as low as 1 ms) and extreme reliability (up to 99.9999999%), capabilities currently supported in 5G through techniques such as mini-slots, grant-free access, and redundant multi-connectivity. However, advanced industrial use cases, such as Sercos or EtherCAT replacements, require even lower latencies (~100 µs) at gigabit data rates and heightened reliability to prevent equipment downtime from consecutive packet losses. 6G is envisioned to meet these stringent requirements more efficiently. Contrary to previous assumptions, mmWave signal propagation proves viable on factory floors, enabling URLLC and high-throughput communication using wide bandwidths. To enhance reliability, 6G will incorporate multi-path transmissions via wireless relays and device-to-device (D2D) links. AI-driven predictive beamforming will further reduce link instability. Additionally, 6G aims to push networking toward ultra-sustainability with the concept of zero-energy IoT devices. These devices—akin to passive RF identification systems—are envisioned to operate for decades without battery replacement, leveraging innovations such as ultra-low power consumption, EH from network signals, and durable embedded energy storage. Applications may include long-term structural health monitoring of infrastructure, such as bridges and tunnels, paving the way for truly autonomous, maintenance-free wireless systems.
7.10 Integrated 6G Architecture and Network Subdomain
Cellular network architecture in earlier generations was primarily developed to extend voice and internet services to mobile endpoints. With 5G, the focus shifted to supporting industrial environments through innovations such as time-sensitive networking bridge functionality. However, for 6G to become a fully integrated solution capable of replacing wired communication systems in such settings, it must deliver deterministic, wire-grade reliability across diverse connectivity scenarios. These scenarios range from static, isolated devices to locally interacting systems and highly mobile entities, such as robot or drone swarms, which must maintain communication both within the group and with the network when they are separated. To address these needs, 6G is expected to introduce semi-autonomous sub-networks capable of maintaining essential services even during disconnection from the broader network. This approach will require robust multi-path connectivity, leveraging both infrastructure and opportunistic D2D communication, potentially leading to an architecture that transcends traditional cellular boundaries. Integrating these sub-networks into the broader 6G framework offers several benefits: ensuring consistent, high-speed, low-latency, and resilient communication; enforcing comprehensive security and resilience policies at the device level; and enabling dynamic allocation of service execution between edge cloud systems and local devices within the sub-network. Furthermore, time-sensitive communications, initially facilitated in 5G through integration with time-sensitive networking, will be natively supported in 6G. This enhancement will enable time-critical applications to operate reliably over larger geographical areas, even in mobile scenarios.
7.11 Artificial Intelligence-Generated Content (AIGC)
Artificial Intelligence-Generated Content (AIGC) encompasses any digital content—such as text, images, audio, or video—produced by AI models. These models are trained using extensive datasets, allowing them to learn patterns, styles, and structures inherent in the data. With this understanding, the AI can then create new, original content that mirrors the characteristics of the input data. AIGC serves a wide range of fields, including media production, artistic design, education, and product development. It offers the advantages of automating content generation, conserving time and effort, and exploring creative avenues that may extend beyond human capabilities.
A notable development in this domain is the emergence of mobile AIGC, which brings generative AI capabilities directly to mobile platforms. Owing to improvements in mobile processors, memory optimization, and on-device ML, it is now possible to run Generative AI (GAI) models on smartphones and similar devices. This enables real-time content creation and editing without relying heavily on cloud services or internet connectivity, enhancing accessibility and responsiveness. Unlike traditional AIGC, which focuses on large-scale content production, mobile AIGC prioritizes personalized, on-the-go applications, such as real-time image/video editing and AR integration.
GAI also enhances AIGC applications—like ChatGPT—within XG wireless networks. By placing AI models at the edge of the network, rather than solely on centralized cloud infrastructure, both latency and bandwidth consumption are significantly reduced. Moreover, GAI can generate synthetic training data while safeguarding user privacy, enabling innovative and secure AIGC implementations in wireless environments. This distributed approach to AI deployment could transform the integration of generative content into future communication systems [111].
7.12 Integrated Sensing and Communications (ISAC)
One of the core visions of 6G is the seamless integration of communication and perception, enabling intelligent, full-spectrum applications. This is driven by ISAC, which unifies sensing and data transmission into a single, resource-efficient framework. By combining these traditionally separate systems, ISAC enhances the utilization of hardware, spectrum, and energy while providing improved environmental awareness across networks, devices, and services. Historically, sensing and communication evolved independently. However, recent advances—such as the shared use of high-frequency antennas and similar signal-processing techniques—have brought them closer together. The roots of ISAC date back to the 1960s; however, current developments have led to significant theoretical and practical innovations, including OFDM-based implementations and applications in vehicular communications. Despite its promise, ISAC faces technical hurdles: precise channel modeling, optimal frequency selection, accurate measurements, and efficient waveform design. Overcoming these requires a cohesive approach to system architecture and signal management. Standardization efforts, such as IEEE 802.11bf and 3GPP initiatives, reflect the growing importance of ISAC. In future networks, ISAC will turn infrastructure components like BSs, UEs, and reconfigurable surfaces into active sensors. These elements can detect location, motion, and propagation conditions, enabling smarter and more responsive networks. ISAC supports key 6G use cases—such as autonomous transport and smart cities—by providing real-time, accurate environmental data, thereby improving both service quality and energy efficiency.
Furthermore, integrating GAI with ISAC enhances its capabilities. GAI can predict environmental shifts, fill in incomplete sensing data, and improve CSI, thereby enabling robust communication even in dynamic or degraded conditions. This synergy promises smarter, adaptive, and more secure network operations in the 6G era [112]. Fig. 18 shows the network architecture of an ISAC system that integrates RIS to co-optimize its dual capabilities. The ISAC BS is the transceiver hub, transmitting both communication and sensing waveforms over shared spectral and temporal resources. On the communication side, the ISAC-BS serves multiple UEs, from User 1 through User K. The RIS is a passive, controllable reflector array that achieves passive beamforming by dynamically adjusting the phase and amplitude reflection coefficients of its elements. This intelligent reflection boosts signal quality (e.g., the Signal-to-Interference-plus-Noise Ratio) and improves coverage for users, especially those in NLoS conditions. At the same time, the sensing link enables the ISAC-BS to monitor a target (e.g., a drone) by analyzing the echoes of the transmitted signal. The RIS also contributes to sensing by passively shaping the scattering environment to enhance the target's radar cross section and improve the geometry for better localization. This architecture demonstrates how the RIS leverages the propagation environment to enhance both communication and sensing functionalities simultaneously.

Figure 18: ISAC-enabled network architecture integrating RIS into joint communication and sensing
7.13 Semantic Communications (SemCom)
Semantic Communication (SemCom) is an emerging paradigm that shifts the focus from transmitting raw data bits to conveying the underlying meaning of information. This is particularly vital in 6G networks, where efficient bandwidth utilization, low latency, and enhanced user experiences—especially for data-intensive applications such as the metaverse and AI services—are paramount. SemCom leverages ML and information theory to extract essential semantic content, significantly compressing data while retaining its intent. It dynamically adapts transmission based on network conditions and user context, ensuring critical information reaches the receiver reliably. At its core, SemCom involves semantic encoders and decoders that use shared Knowledge Bases (KBs) to interpret and reconstruct messages meaningfully. These KBs enable the system to account for context and user-specific background, facilitating effective communication even among heterogeneous users. A significant challenge in SemCom is designing and training the encoder-decoder pair, a task that is typically resource-intensive. However, GAI offers a promising solution by enabling efficient semantic decoding without joint training. GAI also enhances message reconstruction, data compression, and context-aware interpretation, making communication systems more resilient and efficient.
Historically rooted in Shannon's classical information theory and inspired by Weaver's call to address semantic meaning, semantic communication has undergone significant evolution. Recent advancements have applied SemCom to image, speech, and text transmission, addressing bottlenecks in conventional end-to-end systems. Still, SemCom faces hurdles, including the precision of semantic models, the design of error-tolerant mechanisms, and the implementation of lightweight, secure, and adaptable frameworks for resource-constrained devices. Key concerns also include secure semantic sharing, tamper-resistance, and user privacy protection. Despite these challenges, SemCom holds great promise for revolutionizing future wireless communication networks [113]. Fig. 19 shows the technical structure of a SemCom system, which uses a specialized encoder-decoder architecture to optimize communication among heterogeneous users (e.g., drones, vehicles, human operators) by focusing on the meaning transmitted rather than the bits. The system takes multimodal input (e.g., raw images, text, sensor data) and processes it against KBs, typically represented as local knowledge graphs or pre-trained models. Its essence is to extract and compress the latent semantic representation of the input while identifying and discarding irrelevant syntactic information. This compact semantic representation is then sent over the bandwidth-constrained communication channel. On the receiving side, the Semantic Decoder interprets the compressed semantic symbols, consulting its own correlated KBs to restore the original meaning or determine the intended action, rather than reconstructing the original data bit-for-bit. The design capitalizes on the contextual awareness offered by the KBs and the rich multimodal input, significantly reducing transmission overhead while maintaining communicative effectiveness, making the system robust and efficient in challenging or bandwidth-limited conditions.

Figure 19: Architecture of a SemCom system with a semantic encoder–decoder
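The KB-driven encode/decode flow can be illustrated with a deliberately tiny sketch: sender and receiver share a knowledge base, so only compact concept IDs cross the channel instead of full messages. The knowledge base and messages below are invented for this example; a real SemCom system would use learned semantic encoders and decoders rather than a lookup table.

```python
# Toy illustration of KB-based semantic coding. The shared KB and the sample
# message are invented; real systems learn the encoder/decoder jointly.
SHARED_KB = {
    1: "obstacle detected ahead",
    2: "reduce speed",
    3: "reroute via side road",
}
PHRASE_TO_ID = {v: k for k, v in SHARED_KB.items()}

def semantic_encode(message):
    # Keep only phrases the shared KB can reconstruct; drop the rest as
    # irrelevant syntactic detail.
    return [PHRASE_TO_ID[p] for p in message if p in PHRASE_TO_ID]

def semantic_decode(symbols):
    # Reconstruct meaning from the receiver's copy of the shared KB.
    return [SHARED_KB[s] for s in symbols]

msg = ["obstacle detected ahead", "um, I think", "reduce speed"]
sent = semantic_encode(msg)   # a few small integers cross the channel
print("transmitted symbols:", sent)
print("reconstructed:", semantic_decode(sent))
```

The point of the sketch is the bandwidth asymmetry: the channel carries short symbol IDs while the shared KBs on both sides carry the heavy contextual knowledge, which is exactly the division of labor the architecture above describes.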
7.14 Integrating Blockchain in Future 6G Architectures
Blockchain technology, initially popularized by Bitcoin, has attracted considerable attention across academic and industrial sectors. At its core, blockchain operates as a decentralized public ledger within a peer-to-peer network, where each block in the chain is linked to a predecessor via cryptographic hashes. Every block includes key elements such as a version number, timestamp, transaction data, and a Merkle root that encapsulates all transaction hashes. New blocks are validated and added to the chain through consensus mechanisms, such as Proof of Work, wherein miners solve complex computational problems and broadcast validated blocks to the network. The inherent features of blockchain—such as decentralization, transparency, immutability, and robust security—make it a reliable solution for ensuring data integrity and trust. However, its scalability remains a notable limitation, particularly in terms of throughput, storage requirements, and network overhead. In the context of 5G and emerging 6G systems, blockchain has demonstrated promising applications in edge computing, network slicing, network function virtualization, and device-to-device (D2D) communication. It also enhances services across various sectors, including smart cities, healthcare, UAV operations, and intelligent transportation, by improving data management, resource sharing, security, and privacy. Furthermore, mobile networks can support blockchain functionality by providing the necessary computational and storage capabilities to enable tasks such as encryption, hashing, and consensus mechanisms. The integration of blockchain in 6G networks is expected to significantly enhance the infrastructure by providing greater flexibility, improved security, and operational efficiency [114].
7.15 Sustainable Devices and Passive Communication for 6G
Energy-autonomous devices, often called passive or energy-neutral, derive power from their surroundings, primarily through ambient RF signals. These devices typically use capacitors charged by RF energy and employ backscatter communication, in which the signal is reflected and modulated by varying the antenna's load impedance. This modulation enables data transmission without the need for active radio generation, making communication highly energy efficient [115]. Such technology supports large-scale applications, including asset tracking and environmental monitoring in industries such as healthcare, manufacturing, and smart urban infrastructure. A major advantage is the elimination of batteries, which reduces maintenance needs and addresses environmental concerns associated with battery disposal. Nonetheless, backscatter communication has limitations, particularly in terms of signal strength and link budget efficiency, where the received signal power diminishes with the square of the path gain factor (β²). Overcoming this requires advanced antenna technologies, such as directional antennas or antenna arrays, to mitigate losses. The integration of massive MIMO, a cornerstone of 5G and 6G networks, enhances communication with these passive devices by offering array gains that scale with the number of antennas. Despite its potential, scalability and deployment costs remain critical issues. Mass deployment of passive devices could entail significant costs, necessitating affordable, efficient design strategies. Initiatives like H2020-REINDEER are actively addressing these challenges by developing scalable solutions for energy-neutral communication systems [116].
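The β² penalty can be seen in a back-of-the-envelope link budget: because the tag reflects rather than transmits, the one-way path gain β applies twice on the round trip. The sketch below uses the free-space Friis gain with unit antenna gains, and the 915 MHz / 10 m numbers are purely illustrative.

```python
import math

# Back-of-the-envelope backscatter link budget (illustrative values): the
# reader's signal traverses the channel to the tag and back, so received
# power scales with beta**2 rather than beta.
def path_gain(distance_m, freq_hz):
    """Free-space one-way power gain (Friis formula, unit antenna gains)."""
    wavelength = 3e8 / freq_hz
    return (wavelength / (4 * math.pi * distance_m)) ** 2

p_tx = 1.0                    # reader transmit power, watts
beta = path_gain(10, 915e6)   # one-way gain at 10 m, 915 MHz
direct = p_tx * beta          # conventional one-way link
backscatter = p_tx * beta**2  # round trip through the same channel

print(f"one-way loss:     {10 * math.log10(direct):.1f} dB")
print(f"backscatter loss: {10 * math.log10(backscatter):.1f} dB")
```

The roughly doubled loss in dB (about -52 dB one-way vs. about -103 dB round-trip in this example) is why the text points to directional antennas and massive MIMO array gains as essential enablers for passive devices.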
7.16 Emerging Innovations Driving 6G Networks
Several forward-looking technologies are set to define the 6G wireless landscape. One such innovation is wireless information and energy transfer, which allows the simultaneous transmission of energy and data. This advancement can power battery-less devices—such as sensors and wearables—while maintaining continuous communication, making it particularly valuable for applications like real-time physiological monitoring. Another critical enabler for 6G is the integration of sensing with communication. This fusion is vital for supporting intelligent, autonomous wireless systems by enabling the network to perceive and adapt to its environment. Despite challenges such as managing large volumes of sensor data and dynamic resources, this convergence enhances situational awareness and system responsiveness. Dynamic network slicing is also expected to play a key role in 6G. Enabled by technologies such as SDN and network function virtualization, it allows service providers to create virtual networks tailored to specific use cases. This flexibility enables efficient management of diverse, large-scale network demands across various applications [117]. Lastly, proactive caching addresses the growing downlink traffic demands by pre-storing popular content closer to the users. Through intelligent caching strategies and efficient RA, this technique significantly enhances data delivery speed and overall user experience, particularly in data-intensive environments.
7.17 Integrated Review of 6G Functional Layers and Innovations
To provide a holistic perspective on the diverse technologies shaping the 6G landscape, Table 7 offers a structured comparison across multiple dimensions. It highlights each feature’s core functionality, associated benefits, enabling technologies, practical applications, existing limitations, and potential future directions. This comparative analysis enables clearer insights into how various technological layers interact, complement, and evolve within the 6G ecosystem. The goal is to aid researchers and network designers in identifying key synergies, trade-offs, and research gaps across the integrated technological stack of XG wireless communication systems.

7.18 Mapping Enabling Technologies to Computational Models and Applications
Addressing a significant research gap in the literature, Table 8 offers a systematic view of how enabling technologies are used with the most appropriate computational models and application scenarios in 6G networks. Examples include blockchain, which excels at edge-computed, real-time spectrum sharing and traffic management, and cloud deployment, which can support scalable, secure data storage and auditing, with hybrid models facilitating secure, scalable integration. IoT benefits from edge analytics for anomaly detection, cloud resources for large-scale aggregation, and MEC for low-latency responsiveness in smart cities, healthcare, and industrial automation. ML, DL, and RL represent AI methods that demonstrate strengths in fast inference, cloud-based training, and hybrid architectures, such as federated learning, to achieve privacy-preserving optimization in applications like SS and autonomous driving.

MEC is necessary for processing data in real time near users, particularly for UAV control and AR/VR, and must be coordinated with cloud resources to scale effectively. RIS enhances SE and coverage through real-time edge reconfiguration, and AI-assisted joint optimization between the cloud and edge makes it even more effective. Quantum computing remains too immature for edge deployment, but it can support secure key distribution, optimization, and AI acceleration in cloud and hybrid environments. Lastly, AI+MEC evolves into edge intelligence—a powerful paradigm for real-time, situation-aware decision-making in connected cars, intelligent manufacturing, and IIoT—albeit with challenges related to lightweight model engineering and device heterogeneity. Collectively, this mapping clarifies how edge, cloud, and hybrid paradigms best support each enabling technology, identifies synergies, and reveals the most pressing challenges to be addressed in future work.
8 Optimization Techniques with AI
Compared to conventional optimization methods, AI-based solutions offer superior efficiency in reducing computational complexity and enhancing resource utilization, particularly in dynamic or uncertain environments. When integrated with technologies like MEC or Wireless Power Transfer (WPT), AI further ensures timely data processing and improved energy efficiency in 6G systems. As 6G networks evolve, AI plays a central role in optimizing communications by meeting the growing demands for high speed, extensive connectivity, and real-time adaptability. Key areas where AI drives these improvements include network slicing, spectrum allocation, and traffic management [118].
8.1 AI-Driven Network Slicing in 6G Wireless Networks
Network slicing is a critical capability in 6G networks, allowing the creation of multiple virtualized networks—or slices—over a shared physical infrastructure, each tailored to specific services, applications, or user demands. This approach supports efficient RA, performance optimization, and enhanced security. However, managing these slices in real time amid fluctuating network demands introduces significant complexity. AI addresses this challenge by enabling dynamic, intelligent, and automated slice management. Leveraging techniques such as ML, DL, RL, and FL, AI transforms network slicing into an adaptive and self-optimizing process [119]. AI-driven models can continuously monitor and analyze traffic patterns, enabling dynamic slice configuration and allocation. For instance, ML algorithms predict traffic spikes and adjust bandwidth, latency, and processing resources accordingly, ensuring that critical services, such as autonomous driving or telemedicine, receive priority access. Predictive models, such as LSTM and RNNs, anticipate user behavior and future demand, proactively adjusting slices to prevent congestion and enhance service reliability [120]. AI also enhances QoS by continuously monitoring performance metrics, such as latency and error rates. RL-based systems refine slice configurations by rewarding optimal performance and penalizing inefficiencies, thus improving user experience.
In addition, AI enhances resource utilization and cost efficiency by analyzing usage trends to minimize waste and allocate resources only where necessary. Metaheuristic approaches, including GA and Particle Swarm Optimization (PSO), help identify optimal configurations, particularly during low-demand periods [121]. Security is another critical area where AI plays a significant role. Using ML-based IDS and AD systems, AI can isolate compromised slices to prevent threats from spreading across the network, ensuring robust protection for sensitive services like healthcare or financial transactions. AI also facilitates real-time automation and self-optimization. RL and DQN allow slices to adapt to changing conditions autonomously, minimizing human error and ensuring seamless performance [122]. Furthermore, FL supports distributed slice management by allowing AI models to train locally across edge nodes without sharing sensitive data. This approach enhances privacy, minimizes latency, and ensures localized adaptability across various geographic or application-specific domains. Overall, AI plays an indispensable role in optimizing 6G network slicing by enabling real-time decision-making, predictive RA, and enhanced QoS. As demonstrated in recent studies, integrating LSTMs for traffic prediction and RL for strategy refinement enables networks to efficiently manage resources while maintaining scalability and resilience across diverse scenarios [123]. This intelligent slicing not only enhances user experience but also ensures that 6G networks meet the demanding requirements of XG applications such as immersive VR, autonomous systems, and intelligent IoT.
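A minimal sketch of the predictive slice scaling described above: a simple exponential-smoothing forecaster stands in for the LSTM traffic predictor, and bandwidth is split across slices in proportion to predicted demand weighted by QoS priority. Slice names, demand samples, and priorities are invented for illustration.

```python
# Sketch of predictive slice scaling. Exponential smoothing stands in for a
# learned (e.g., LSTM) forecaster; all traffic numbers are invented.
def forecast(history, alpha=0.5):
    """One-step-ahead demand estimate via exponential smoothing."""
    est = history[0]
    for x in history[1:]:
        est = alpha * x + (1 - alpha) * est
    return est

def allocate(slices, total_bw_mhz):
    """Split bandwidth by predicted demand weighted by QoS priority."""
    weights = {name: forecast(hist) * prio
               for name, (hist, prio) in slices.items()}
    total = sum(weights.values())
    return {name: total_bw_mhz * w / total for name, w in weights.items()}

slices = {
    # name: (recent demand samples, QoS priority multiplier)
    "urllc_autonomous_driving": ([10, 12, 15, 20], 3.0),
    "embb_video":               ([40, 42, 38, 41], 1.0),
}
alloc = allocate(slices, 100)
for name, bw in alloc.items():
    print(f"{name}: {bw:.1f} MHz")
```

Even though the eMBB slice carries more raw traffic, the rising demand trend and higher priority of the URLLC slice give it the larger share, mirroring how the AI-driven slicing described above protects critical services during demand spikes.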
8.2 AI for Resource Allocation in 6G Networks
Efficient RA is fundamental to the performance of 6G networks, which must support a wide range of services, including massive IoT deployments, AVs, and immersive VR. These diverse applications impose varied and stringent demands on bandwidth, power, and computational capacity. AI offers powerful tools for intelligent DRA, enabling networks to adapt in real-time to changing traffic conditions, optimize energy use, and meet strict QoS requirements. AI-based algorithms play a central role in RM by analyzing network data and predicting future demands. ML techniques, including SL and USL, are used to detect traffic patterns and anticipate congestion. Algorithms like SVM and KNN analyze historical and real-time data to forecast user behavior and allocate resources proactively [124]. DL models such as CNNs and LSTMs further enhance these capabilities by processing high-dimensional traffic data to identify congestion hotspots and predict future load trends. These models enhance bandwidth distribution, routing, and device management, thereby improving efficiency and network stability.
In addition to ML and DL, heuristic and metaheuristic algorithms, including GA and PSO, are also employed for complex RA problems. GA iteratively improves resource distribution by simulating natural selection, while PSO uses cooperative behavior among agents to identify optimal solutions, such as minimizing latency or balancing power consumption [125]. These algorithms are beneficial for solving multi-objective optimization problems in dynamic environments. RL provides a framework for autonomous and adaptive resource distribution. It models the problem as a Markov Decision Process, where the system learns optimal actions—such as reallocating bandwidth or adjusting transmission power—by interacting with the environment and receiving rewards based on performance improvements. Q-learning enables the system to construct a table that maps actions to expected rewards, whereas DQNs use DNNs to generalize across complex, large-scale network states. These approaches allow precise, scalable decision-making in real time.
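The tabular formulation above can be made concrete with a small ε-greedy Q-learning sketch that learns a bandwidth-adjustment policy. The three load states, three actions, and the reward rule are invented for illustration and stand in for real network measurements and a real MDP transition model.

```python
import random
random.seed(0)

STATES = ["low", "medium", "high"]          # observed cell load
ACTIONS = ["decrease", "hold", "increase"]  # bandwidth adjustment

# Hypothetical reward: the allocation should track the load level.
BEST = {"low": "decrease", "medium": "hold", "high": "increase"}

def reward(state, action):
    return 1.0 if action == BEST[state] else -1.0

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.2):
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = random.choice(STATES)            # load observed this slot
        if random.random() < eps:            # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = reward(s, a)
        s2 = random.choice(STATES)           # next load (i.i.d. here)
        best_next = max(q[(s2, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

def policy(q, state):
    return max(ACTIONS, key=lambda a: q[(state, a)])

q = train()
for s in STATES:
    print(s, "->", policy(q, s))
```

A DQN replaces the dictionary `q` with a neural network so that the same update generalizes across continuous, high-dimensional network states.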
MARL extends this capability by allowing multiple agents, such as BSs or edge routers, to learn collaboratively. Each agent makes decisions based on local information while contributing to global resource optimization tasks, such as spectrum sharing and load balancing. MARL is especially effective in dense, heterogeneous network environments, where decentralized coordination is essential to maintain efficiency and reduce interference. Overall, AI-powered RA techniques empower 6G networks to respond intelligently to varying demands, deliver consistent performance, enhance scalability, and improve energy efficiency. These adaptive, data-driven strategies are crucial to building a robust and responsive infrastructure for the next generation of wireless communication.
8.3 AI for Energy Efficiency Optimization in 6G
As 6G networks expand in scale and device density, energy efficiency becomes critical. AI provides intelligent solutions to optimize energy use while maintaining service quality. Energy-aware ML models, such as NNs, can predict traffic patterns and adjust network parameters during low-demand periods to reduce power consumption, enabling BSs to switch to low-power modes and lower operational costs. Additionally, AI allows dynamic resource scaling by monitoring real-time traffic and adjusting resources, such as power levels or processing assignments. RL agents can learn optimal scaling strategies to minimize energy use without compromising performance, for instance, by adjusting transmission power during off-peak times or reallocating tasks to more efficient nodes. To ensure the sustainable deployment of AI, Green AI techniques—such as model pruning, quantization, and lightweight network use—are essential. These methods reduce the computational load and energy consumption of AI models themselves, thereby preserving the overall energy-saving benefits of AI-based optimization [126]. Together, these AI-driven methods support sustainable 6G operations by balancing high-performance requirements with reduced environmental impact.
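The pruning and quantization steps mentioned above can be sketched in a few lines. The weight matrix here is random, and the 50% sparsity and 8-bit settings are illustrative choices, not recommendations from the cited work [126].

```python
import random
random.seed(1)

# Toy dense-layer weight matrix (stand-in for a trained model).
weights = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(4)]

def prune(w, sparsity=0.5):
    """Magnitude pruning: zero the smallest `sparsity` fraction of weights."""
    flat = sorted(abs(x) for row in w for x in row)
    thresh = flat[int(len(flat) * sparsity)]
    return [[0.0 if abs(x) < thresh else x for x in row] for row in w]

def quantize(w, bits=8):
    """Uniform symmetric quantization of weights to signed integers."""
    scale = max(abs(x) for row in w for x in row) / (2 ** (bits - 1) - 1)
    return [[round(x / scale) for x in row] for row in w], scale

pruned = prune(weights)
ints, scale = quantize(pruned)
zeros = sum(x == 0.0 for row in pruned for x in row)
print(f"{zeros}/32 weights pruned; int8 range used: "
      f"{min(min(r) for r in ints)}..{max(max(r) for r in ints)}")
```

Zeroed weights can be skipped at inference time and 8-bit integers replace 32-bit floats, which is the source of the energy savings that keep AI-based optimization a net gain.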
8.4 Generative AI for Wireless Communication Optimization
GAI is playing a transformative role in optimizing wireless communication systems, particularly across XR, ISAC, SemCom, and metaverse applications. These models, including Generative Adversarial Networks (GANs), Generative Flow Networks (GFlowNets), diffusion models, and Neuro-Symbolic AI (NeSy AI), enhance system intelligence, adaptability, and efficiency by enabling predictive, context-aware decision-making. In XR environments operating at THz frequencies, GAI is used to enhance the user experience while minimizing handover disruptions. A DL model, introduced in [127], imputes missing LoS and NLoS sensing data using a non-autoregressive encoder-decoder transformer. This enriched, continuous sensing data supports proactive user association and RIS configuration via a hysteretic deep recurrent Q-network, effectively capturing long-term user behavior and environmental changes in highly dynamic scenarios.
In ISAC-enabled networks, GAI models optimize channel estimation and RIS configurations. GAN-based architectures, such as GAN-channel behavior detection and RIS-GAN, enable non-blind denoising and model-driven CSI estimation without prior knowledge of noise. These models enhance training stability and scalability, especially in multi-user environments. GFlowNets [128] further support ISAC by learning adaptive RIS configurations from low-dimensional channel charts that preserve spatial correlations between users and the environment, ensuring robust performance in real-time communication.
SemCom benefits significantly from GAI through deeper message understanding and enhanced reasoning capabilities. NeSy AI integrates neural and symbolic reasoning to enable end devices to interpret, select, and transmit semantically meaningful messages while minimizing distortion and accounting for KB inconsistencies. Trained via GFlowNet, this approach also supports causal inference and decision-making under uncertainty. The work in [129] introduces implicit, semantic-aware communication, in which receivers extract hidden meanings through generative imitation learning. A GAI-powered interpreter generates reasoning paths by iteratively selecting and linking concepts, guided by feedback from a discriminator network that evaluates semantic closeness without revealing sensitive information. Covert SemCom is explored, where Generative Diffusion Model (GDM)-based models ensure undetectable transmission by maximizing the detection error probability for potential eavesdroppers, while maintaining image quality and controlling energy consumption. A two-stage optimization process, involving condition-aware scheme evaluation and network generation, strikes a balance between semantic fidelity and covertness. For full-duplex D2D communication among mixed-reality users, study [130] utilizes a denoising diffusion probabilistic model and self-SL (SuperPoint and SuperGlue) to extract and match semantic interest points. A diffusion-based contract mechanism incentivizes cooperative data exchange between semantic information producers and receivers, optimizing power use and transmission quality. Additionally, study [131] presents a GDM and DRL-based power control method for UAVs, allocating transmission power based on the semantic importance of detected objects to maximize overall communication effectiveness.
In the context of metaverse applications, study [132] introduces an Integrated SemCom and AIGC (ISGC) framework to unify communication, content generation, and rendering. The system addresses bandwidth allocation challenges using a Markov Decision Process, where the state space includes parameters such as semantic entropy, channel gains, power levels, and computational resources. A combination of diffusion modeling and DRL refines control signals by injecting and removing noise, guiding bandwidth allocation across semantic, inference, and rendering modules. This integrated approach enhances content quality and responsiveness, addressing inefficiencies and low user engagement in metaverse environments. Overall, GAI models—spanning GANs, GFlowNets, NeSy AI, and diffusion frameworks—enable robust, intelligent, and semantically rich communication across diverse 6G scenarios. By learning from complex environments, adapting to dynamic user contexts, and optimizing RA, GAI paves the way for resilient and human-centric network architectures [133].
8.5 Self-Optimizing Transmitters and Receivers
Recent advancements in DL have shown that NNs can be trained to communicate over quasi-static wireless links more efficiently than traditional model-based systems. Unlike conventional approaches that require manual design of waveforms, constellations, and reference signals, DL models at both the transmitter and receiver can jointly learn these parameters through end-to-end training. This concept eliminates the need for explicit signal design by enabling the system to automatically discover the most effective communication strategies. Although this end-to-end learning approach may not yet be practical for highly dynamic or multi-user scenarios, future 6G systems are expected to incorporate learning capabilities that operate directly in the field. This would allow communication parameters to be optimized based on real-time factors such as available spectrum, environmental conditions, hardware configurations, and performance requirements.
A key evolution in XG networks will be the integration of hardware constraints into the design of the air interface. Traditionally, the interface is developed based on general hardware assumptions, with the expectation that all devices will meet those specifications. However, in 6G, the system will instead adapt to each device’s specific capabilities. For instance, if a device has limited resolution in its analog-to-digital or digital-to-analog converters, the learning-based communication framework can account for these limitations to optimize signaling and overall performance accordingly [134].
Fig. 20 presents a highly technical end-to-end learning architecture designed for adaptive communication that explicitly accounts for both hardware impairments and the volatility of wireless channels. This system departs from traditional model-based approaches by using NNs as the core functional blocks in both the transmitter and receiver. The system is structured as a hardware-in-the-loop autoencoder. The input information bits are processed by the transmitter NN, which learns a robust mapping (encoding) that inherently incorporates dependencies on the local Analog Front-End and Digital-to-Analog Converter hardware. The resultant analog signal is transmitted over the wireless channel, which introduces time-varying fading and noise. At the receiver, the signal is affected by its own hardware elements, specifically the Analog-to-Digital Converter and Analog Front-End. The perturbed signal is then processed by the receiver NN, which learns the optimal decoding function to reconstruct the original bits. The NNs at both ends are jointly trained to minimize the end-to-end loss function (e.g., Bit Error Rate or Mean Squared Error), thereby adapting their parameters to maximize system performance. A key feature is the inclusion of a side channel, used for bootstrapping or initial training, allowing the architecture to quickly learn the initial hardware impairment and channel statistics upon initialization. Furthermore, a feedback channel dependency system is employed, enabling the receiver to transmit crucial performance metrics (e.g., instantaneous loss gradient or error signal) back to the transmitter. This closed-loop feedback facilitates continuous and online adaptation of both the transmitter’s encoding policy and the receiver’s decoding policy, ensuring the system remains efficient and effective despite continual changes in the wireless environment.

Figure 20: End-to-end learning architecture for adaptive communication under hardware and channel dependencies
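As a minimal, non-learned stand-in for the pipeline in Fig. 20, the sketch below wires the same blocks together—mapper, low-resolution DAC, noisy channel, ADC, detector—with a fixed QPSK-style constellation in place of the transmitter NN and nearest-neighbor detection in place of the receiver NN. The quantizer resolution and noise level are illustrative; in the actual architecture, both mappings would be trained end-to-end against the observed hardware and channel.

```python
import random
random.seed(2)

# Fixed QPSK-style mapping standing in for the learned transmitter NN.
CONST = {(0, 0): (1, 1), (0, 1): (-1, 1), (1, 0): (1, -1), (1, 1): (-1, -1)}

def dac(sym, levels=4):
    """Low-resolution DAC: snap each component to a coarse uniform grid."""
    step = 2 / (levels - 1)
    return tuple(round(c / step) * step for c in sym)

def channel(sym, sigma=0.1):
    """Wireless channel modeled here as additive Gaussian noise."""
    return tuple(c + random.gauss(0, sigma) for c in sym)

def adc(sym, levels=4):
    """Receiver-side quantizer (same coarse grid as the DAC)."""
    return dac(sym, levels)

def decode(sym):
    """Nearest-neighbor detection standing in for the receiver NN."""
    return min(CONST, key=lambda b: sum((x - y) ** 2
                                        for x, y in zip(CONST[b], sym)))

def symbol_errors(bit_pairs):
    return sum(decode(adc(channel(dac(CONST[b])))) != b for b in bit_pairs)

pairs = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(1000)]
print("symbol errors:", symbol_errors(pairs))
```

End-to-end training would replace `CONST` and `decode` with jointly optimized NNs whose loss gradient flows back over the feedback channel, so the constellation itself adapts to the DAC/ADC impairments rather than being designed by hand.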
8.6 Optimized Resource Control
RM is a fundamental component of the MAC layer in 6G networks, aiming to enhance spectral efficiency and EE while suppressing intercell and intracell interference. Traditional approaches, which rely on convex and non-convex optimization techniques, often fall short in dynamic, time-varying network environments due to high computational demands and suboptimal performance. As a result, AI-driven methods are increasingly adopted for their ability to deliver fast, adaptive, and near-optimal solutions in real time. For user association, AI techniques such as fuzzy Q-learning, RL, and DRL have been employed to optimize the connections between users and APs, considering factors such as SNR, QoS requirements, and network load. Various studies propose distributed and SL-based approaches that enable intelligent, autonomous user-AP association, particularly in vehicular, UAV, and mmWave networks.
RA is another key area where AI plays a transformative role. DL, multitask learning, and transformer-based architectures are employed to optimize resource block assignments for different MA schemes, including OMA, NOMA, and RIS-assisted systems. RL models such as Q-learning, DQN, DDPG, and multi-agent DRL are applied to dynamically allocate bandwidth, power, and time slots based on historical experiences. Additionally, FL enables collaborative, privacy-preserving RM by allowing distributed agents to exchange model parameters without sharing raw data, thus reducing communication overhead. Power control, crucial for reducing interference and maximizing throughput, is also optimized using AI models. Decentralized Q-learning, game-theoretic RL, and actor–critic algorithms help in managing transmission power in heterogeneous and CR networks. DRL methods are instrumental in scenarios like RIS-NOMA and UAV-aided systems, enabling real-time adjustments based on environmental feedback. Furthermore, stochastic and USL models are explored to balance performance and computational efficiency, ensuring that power control strategies meet the diverse requirements of 6G applications. Overall, AI-powered techniques are proving vital in revolutionizing RM in the MAC layer, offering scalable, intelligent, and adaptive solutions that align with the complexity and dynamism of future 6G wireless networks [135].
Fig. 21 presents the architecture of an AI-based RM paradigm that optimizes MA protocols at the interface of the Physical Layer and the MAC Layer. The main component of the system is the MAC-RM module, which controls device access to resources (frequency, time, power, and code) using either the OMA or NOMA scheme. This module is functionally broken down into three dynamically regulated blocks, namely: user association, RA, and power control. The blocks are controlled by the decision block that uses dynamic AI algorithms (e.g., DRL). The AI’s outputs are driven by real-time information from the sensing block, which gathers heterogeneous data, including control messages, network status information (e.g., CSI), and benchmarking data (e.g., throughput or latency). This input data is fed into a buffering process that conditionally stores it to optimize the flow and timing of data processed during AI training and inference. In this closed-loop design, decisions are continuously adjusted by performance feedback, enabling the system to autonomously optimize protocol-level choices and significantly increasing the efficiency, flexibility, and performance of MA systems in highly dynamic and complex networks.

Figure 21: AI-based resource management for optimized MA protocol design
8.7 Configuration Optimization
AI-driven techniques have become essential for optimizing medium access protocols in IEEE 802.11 networks by enabling the dynamic adjustment of key parameters, such as the Contention Window (CW), frame size, and transmission rate. These adjustments help minimize collisions, reduce idle time, and improve overall throughput and energy efficiency. To begin with, RL- and SL-based models have been widely applied to optimize CW and backoff settings in standard and extended MAC protocols. These approaches aim to strike a balance between collision avoidance and minimizing idle slots. For instance, algorithms such as Q-learning, DRL, and federated RL can adapt CW values based on network conditions, ensuring fairness and stability of throughput even in densely populated environments. Some models also incorporate traffic priorities or network density to determine optimal backoff strategies, while others employ fuzzy logic or SL to maintain fairness and meet QoS requirements. In terms of frame size configuration, AI solutions help dynamically determine the most efficient aggregation level to enhance data transmission efficiency and reduce overhead. ANN models, RF regression, and logistic regression algorithms are employed to predict optimal frame sizes by analyzing throughput patterns, frame error rate, and energy consumption. Cooperative multi-agent RL techniques further refine time-slot selection policies to improve channel access under varying conditions. These adaptive mechanisms enable flexible, efficient frame-size management that aligns with real-time network demands.
Lastly, selecting the transmission rate is crucial in optimizing link performance and throughput. AI models, including stochastic learning automata, Q-learning, and DNNs, are utilized to identify the optimal modulation and coding scheme levels based on feedback metrics such as SNR or acknowledgment rates. These strategies enable real-time rate adaptation to accommodate user mobility and fluctuating channel conditions. Federated RL and other DL models support faster convergence and scalable learning, making them suitable for complex and dynamic communication environments. In summary, integrating AI into MA protocol parameter tuning—covering CW/backoff control, frame size optimization, and transmission rate adjustment—enhances the adaptability, fairness, and performance of modern wireless networks [136].
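A stateless ε-greedy bandit gives the flavor of learned CW adaptation described above. The collision model (success iff no other station draws the same backoff slot) and the idle-time penalty are crude, invented stand-ins for real MAC feedback; the RL- and SL-based schemes in the literature condition on far richer state.

```python
import random
random.seed(3)

CW_OPTIONS = [16, 32, 64, 128, 256]
N_STATIONS = 20  # hypothetical number of contending stations

def attempt(cw):
    """Success iff no other station draws the same backoff slot
    (a deliberately crude collision model)."""
    mine = random.randrange(cw)
    return all(random.randrange(cw) != mine for _ in range(N_STATIONS - 1))

def learn_cw(rounds=20000, eps=0.1):
    value = {cw: 0.0 for cw in CW_OPTIONS}
    count = {cw: 0 for cw in CW_OPTIONS}
    for _ in range(rounds):
        cw = (random.choice(CW_OPTIONS) if random.random() < eps
              else max(CW_OPTIONS, key=lambda c: value[c]))
        # Reward: successful access minus a penalty for long idle backoffs.
        r = (1.0 if attempt(cw) else 0.0) - 0.001 * cw
        count[cw] += 1
        value[cw] += (r - value[cw]) / count[cw]  # incremental mean
    return max(CW_OPTIONS, key=lambda c: value[c])

best_cw = learn_cw()
print("selected CW:", best_cw)
```

The learned window balances collision probability against wasted idle slots for the assumed station count; a contextual model (e.g., DRL over observed density) would let the choice track network conditions instead of converging to one fixed value.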
Fig. 22 illustrates an advanced architecture for AI-based MAC parameter adjustment to optimize MA protocol performance at the interface between the Physical Layer and the MAC. The system dynamically controls essential MAC-layer mechanisms, including backoff procedures, request-to-send handshake, data transmission, acknowledgments, Network Allocation Vector, and fragmentation/aggregation. The core intelligence resides in the decision block, which employs an AI algorithm (e.g., DRL) to guide the MAC Parameter Adjustment module. This module uses the AI’s output to dynamically configure critical MAC parameters, such as the CW/backoff exponent, frame size, and transmission rate. These fine-grained adjustments are informed by a continuous feedback loop originating from sensing modules that capture and buffer real-time network state, including control messages, network messages (e.g., CSI), and benchmarking information (e.g., throughput, collision rates). By enabling intelligent, dynamic configuration of these parameters based on live network conditions, the architecture significantly enhances the protocol’s efficiency, responsiveness, and aggregate performance in MA systems across varied, dynamic communication scenarios.

Figure 22: AI-based MAC parameter adjustment for optimized MA protocol performance
8.8 AI-Based Switching of Multiple Access Schemes
AI-based techniques are increasingly used to enable intelligent switching between various MA schemes—such as OMA, NOMA, and random access—based on dynamic network requirements and QoS demands. Traditionally, numerical methods such as heuristic algorithms and analytical models have been employed to alternate between schemes, including TDMA, FDMA, CDMA, and CSMA. While these approaches addressed trade-offs in spectral efficiency and EE, they often involved high computational complexity and limited adaptability to real-time conditions. To overcome these limitations, recent research has applied DL and RL models for adaptive MA switching. These AI techniques can efficiently learn optimal access strategies that adapt to network conditions and user behavior. For instance, one study employed ML to distinguish between competitive and non-competitive protocols, while another proposed a NN-powered MAC framework to manage contention and scheduling periods. A further study applied DL and RL to optimize switching between reserved- and contention-based protocols in dynamic environments.
Other notable contributions include DNN-based solutions for switching between OFDMA and SCMA in multi-user systems, showing superior detection performance. One contribution introduced parallel DNNs to efficiently handle user association and MA selection, while another study developed a DDPG-based RL model for resource orchestration in NOMA-based industrial IoT systems. These advancements demonstrate the potential of AI to enable real-time, efficient, and adaptive access switching in XG wireless networks [137]. Fig. 23 illustrates an AI-empowered MA protocol optimization system for switching the MAC scheme. On the left, in the PL and MAC layers, the system considers three access schemes: OMA, NOMA, and RA. A Decision module handles the selection and maintenance of these schemes. A DEMUX/MUX structure routes the data traffic in the direction dictated by the chosen scheme. On the right, dynamic MAC scheme switching is provided by an AI-based decision engine. Input to the AI algorithms includes control messages, network messages, and benchmarking data, which are stored in a buffer across multiple sensing modules. Based on this analysis, the AI selects the optimal access scheme—OMA, NOMA, or RA—according to the current network conditions. The result is a highly adaptive switching mechanism that optimizes performance by selecting the best access strategy in real time, thereby enabling efficient resource use and robust connections.

Figure 23: An AI-driven MAC scheme for adaptive multiple access protocol optimization
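The decision engine in Fig. 23 can be caricatured with a hand-written rule set. In the surveyed systems a DL or RL model learns this mapping from the sensed control, network, and benchmarking data; the features, thresholds, and scheme-selection logic below are purely illustrative.

```python
def select_scheme(active_users, mean_snr_db, snr_spread_db, sporadic_traffic):
    """Toy stand-in for the AI-based MAC scheme selector.

    Returns 'RA' (random access), 'NOMA', or 'OMA'. Thresholds are
    hypothetical; a DRL agent would learn this mapping from feedback."""
    if sporadic_traffic and active_users > 50:
        return "RA"      # grant-free access suits massive sporadic traffic
    if snr_spread_db > 10 and mean_snr_db > 5:
        return "NOMA"    # large channel-gain disparity favors power-domain NOMA
    return "OMA"         # otherwise orthogonal access keeps receivers simple

print(select_scheme(200, 10, 3, True))    # massive sporadic IoT uplink
print(select_scheme(4, 15, 20, False))    # strong near/far user disparity
print(select_scheme(4, 15, 2, False))     # similar users, light load
```

The value of the learned approach is precisely that these thresholds need not be engineered: the agent discovers scheme boundaries from throughput and collision feedback and revises them as conditions drift.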
8.9 AI-Powered Optimization Techniques: Benefits, Challenges, and Outlook
The high-performance requirements of 6G networks have necessitated the use of AI as a crucial means of refining essential communication layers and capabilities. In this comparative analysis, various AI-based optimization techniques are described and applied across the network slicing, resource allocation, energy, transmitter-receiver, and medium access control domains. Table 9, provided below, summarizes the above-discussed optimization methods by listing the AI techniques used, their advantages, current limitations, and potential future developments. It provides a structured comparison that reveals the role that distinctive AI models, including ML, DL, RL, and GAI, play in enhancing network performance, flexibility, and intelligence. By comparing current approaches with AI solutions, the analysis highlights the adaptive and scalable nature of intelligent 6G systems and points to emerging research opportunities in sustainable AI, edge-based training, and resilient, explainable learning models.

9 Security and Privacy in AI-Empowered Multiple Access
The integration of AI into 6G networks introduces significant security and privacy challenges due to its autonomous and complex nature. These networks become appealing targets for cyber threats, necessitating advanced security solutions such as AI-powered IDS and strong encryption. The adoption of emerging technologies, such as distributed ledger technology, PL security, and distributed AI/ML, further increases the attack surface, requiring a thorough assessment of potential vulnerabilities. As highlighted by [138], addressing risks associated with these technologies is critical to ensuring end-to-end security. Additionally, the decentralized structure of 6G, combined with innovations such as THz communication and massive MIMO, necessitates the development of adaptive, intelligent security frameworks to safeguard user data and network integrity across all layers of the 6G architecture.
9.1 Enhanced Security Demands in 6G Networks
With the integration of the Internet of Everything into 6G, including applications in smart homes, healthcare, transportation, and smart grids, security becomes a critical concern. The adoption of THz communication and quantum technologies introduces new vulnerabilities, underscoring the need for high-level security. 6G is expected to support flexible, context-aware security measures, including PL security and quantum cryptography, to protect sensitive data such as financial and medical information. AI integration in 6G will play a vital role in defending against adversarial attacks, while quantum communication enhances confidentiality. As 6G evolves, it aims to strike a balance between user comfort and robust security by addressing key performance indicators, enabling technologies, and system-level challenges. The widespread connectivity and low-latency requirements of smart grids and similar use cases highlight the need for adaptive, intelligent security solutions. However, the growing autonomy of AI-powered devices also presents new risks, especially if malicious entities exploit these capabilities, underscoring the importance of resilient and proactive security frameworks in 6G networks [139].
9.2 AI-Based Security Optimization in 6G Networks
In 6G environments, AI-driven frameworks are being developed to optimize security configurations by balancing trade-offs between security strength and energy consumption. These frameworks dynamically adapt to various scenarios by considering device capabilities, energy sources (e.g., wired, wireless, renewable), and the nature of potential cyber threats. For instance, devices with limited energy may use lighter encryption to conserve power, while those with ample resources can implement more robust security measures. Two case studies illustrate this adaptive approach: smartphones and BSs. Smartphones, with variable energy levels, adjust encryption strength based on current power availability. In contrast, BSs—equipped with more consistent energy sources, such as renewables—handle a broader range of security attributes, including network conditions, user demands, and threat intelligence from nearby nodes. Due to the greater complexity and volume of data at the BS level, advanced modeling techniques, such as function approximation and Fourier basis functions, are employed to reduce dimensionality and improve real-time decision-making. This iterative optimization ensures that both smartphones and BSs can respond intelligently to evolving threats while maintaining operational efficiency [140].
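A toy version of such an energy-aware security policy is shown below. The cipher names are real algorithms, but the thresholds and the mapping itself are invented for illustration rather than taken from [140], where the policy is learned via function approximation.

```python
def pick_cipher(battery_pct, on_renewable, threat_level):
    """Toy policy mapping device energy state and threat level to a
    cipher configuration (thresholds and mapping are illustrative)."""
    if threat_level == "high":
        return "AES-256-GCM"            # never downgrade under active threat
    if on_renewable or battery_pct > 60:
        return "AES-256-GCM"            # ample energy: strongest option
    if battery_pct > 20:
        return "AES-128-GCM"            # lighter cipher to conserve power
    return "ChaCha20-Poly1305"          # efficient software cipher when critical

# Smartphone on a low battery vs. a renewable-powered base station:
print(pick_cipher(15, False, "low"))
print(pick_cipher(40, True, "low"))
```

The learned variants in the case studies replace this lookup with an iteratively optimized value function, so the security/energy trade-off adapts to observed threats rather than fixed battery thresholds.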
9.3 Security Landscape of Future 6G Networks
The 6G network landscape is expected to connect a vast array of devices, including mobile phones, IoT sensors, AVs, and healthcare monitors, significantly increasing the risk of cyber threats. One of the primary concerns lies in the massive deployment of IoT devices, many of which lack sufficient security measures due to hardware limitations and cost constraints. These devices often lack support for strong encryption or authentication protocols, making them vulnerable to remote exploitation and large-scale Distributed Denial of Service (DDoS) attacks that can overload network resources. Another primary concern is the emergence of quantum computing, which threatens to compromise current encryption standards, such as RSA and ECC, potentially jeopardizing the confidentiality and integrity of transmitted data. Transitioning to quantum-resistant cryptographic methods, such as lattice-based or hash-based encryption, is essential but also poses challenges related to computational load and latency, particularly in real-time 6G applications. Additionally, the handling of vast amounts of personal and location-based data in applications such as healthcare monitoring, autonomous driving, and immersive technologies raises serious privacy and data integrity concerns. Unauthorized access or data manipulation could lead to identity theft, surveillance, or dangerous malfunctions in sensitive systems. As the threat landscape becomes more complex, deploying intelligent, AI-powered security frameworks will be essential to protect 6G networks from evolving cyber vulnerabilities and ensure secure, reliable communications [141].
9.4 Role of AI in Intrusion Detection Systems (IDS) for 6G Networks
AI plays a vital role in enhancing IDS for 6G networks, overcoming the limitations of traditional rule-based systems that often fail to detect new or complex cyber threats. ML techniques such as SVM and Decision Trees are commonly used to classify network traffic by identifying patterns from historical data, effectively distinguishing between regular and malicious activity. Ensemble models like RF and Gradient Boosting further improve detection accuracy by combining multiple classifiers, reducing false positives, and increasing reliability. DL methods offer even more advanced capabilities. CNNs can interpret network data as images to detect spatial anomalies. In contrast, RNNs, particularly LSTM networks, are adept at analyzing time-dependent traffic patterns, making them ideal for identifying threats such as botnets and low-rate DDoS attacks. Additionally, FL has emerged as a key solution for decentralized 6G environments where data privacy is a major concern. This approach enables individual devices to train IDS models locally and share only the model parameters with a central server, thereby preserving user privacy while facilitating collaborative learning. As a result, FL enables the scaling of AI-driven IDS across diverse devices—ranging from mobile phones to IoT sensors—without compromising security or data confidentiality [142].
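The traffic-classification step can be illustrated with a tiny k-nearest-neighbor detector over two synthetic flow features. Real IDS pipelines use far richer telemetry and stronger models (RF, Gradient Boosting, LSTM), so this is only a schematic with invented feature distributions.

```python
import math
import random
random.seed(4)

def sample(label):
    """Synthetic (packet_rate, syn_ratio) features: the 'ddos' class has
    a high packet rate and SYN-heavy flows (a toy stand-in for telemetry)."""
    if label == "normal":
        return (random.gauss(100, 20), random.gauss(0.05, 0.02), label)
    return (random.gauss(900, 150), random.gauss(0.6, 0.1), label)

training = [sample("normal") for _ in range(50)] + \
           [sample("ddos") for _ in range(50)]

def knn(x, k=5):
    """Classify a flow by majority vote among its k nearest neighbors."""
    nearest = sorted(training, key=lambda t: math.dist(x, t[:2]))[:k]
    votes = [t[2] for t in nearest]
    return max(set(votes), key=votes.count)

print(knn((120, 0.04)))   # resembles benign traffic
print(knn((850, 0.55)))   # resembles a SYN flood
```

Under FL, each device would train such a classifier on its local flows and share only model parameters, which is what makes the approach scale across heterogeneous 6G endpoints without exposing raw traffic.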
9.5 AI-Based Anomaly Detection in 6G Networks
AD plays a crucial role in enhancing the security of 6G networks by identifying unusual patterns or deviations from normal network behavior that may indicate cyber threats or system faults. AI-based techniques offer powerful tools for this task. USL methods, such as K-means clustering and PCA, are effective in analyzing large, unlabeled datasets. These models detect anomalies by grouping typical data behaviors and flagging outliers, making them especially useful for identifying unknown or zero-day attacks. Autoencoders, a type of NN, are also valuable in detecting anomalies within high-dimensional network data. They learn standard patterns during training and produce high reconstruction errors when encountering abnormal inputs, signaling potential threats. Additionally, hybrid models that combine multiple AI techniques—such as unsupervised clustering with supervised classification—enhance accuracy and reduce false positives. These models ensure reliable, real-time monitoring of 6G traffic, enabling proactive responses to potential security incidents and supporting stable and secure network performance [143].
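As a lightweight stand-in for autoencoder reconstruction error, the sketch below learns a per-feature mean and spread from benign traffic and scores new samples by their largest z-score; large scores flag candidate anomalies. The two synthetic features and the decision thresholds are illustrative assumptions.

```python
import math
import random
random.seed(5)

# "Normal" behavior learned from benign traffic (two toy features).
normal = [(random.gauss(100, 10), random.gauss(0.2, 0.05)) for _ in range(500)]

def fit(data):
    """Estimate per-feature mean and standard deviation."""
    n = len(data)
    means = [sum(x[i] for x in data) / n for i in range(2)]
    stds = [math.sqrt(sum((x[i] - means[i]) ** 2 for x in data) / n)
            for i in range(2)]
    return means, stds

def anomaly_score(x, means, stds):
    """Max absolute z-score across features; large values flag outliers."""
    return max(abs(x[i] - means[i]) / stds[i] for i in range(2))

means, stds = fit(normal)
print(round(anomaly_score((102, 0.21), means, stds), 2))  # typical: low score
print(round(anomaly_score((180, 0.90), means, stds), 2))  # deviant: high score
```

An autoencoder plays the same role in high dimensions: it learns the normal manifold during training, and reconstruction error replaces the z-score as the anomaly signal.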
9.6 AI and Blockchain for Decentralized Security in 6G Networks
The integration of AI and blockchain offers a promising approach to building decentralized and resilient security frameworks for 6G networks. Blockchain’s immutable and distributed ledger capabilities ensure tamper-proof logging of network activities and secure authentication. At the same time, AI enhances threat detection by analyzing access log patterns and identifying anomalies in real time. Smart contracts further improve security by autonomously enforcing policies such as access control and anomaly-based restrictions, thereby reducing the need for manual oversight. Additionally, federated blockchain architectures enable secure collaboration across multiple domains without compromising data privacy, making them ideal for sensitive applications such as healthcare, finance, and smart cities. Despite these advancements, several challenges remain. AI systems themselves can be vulnerable to adversarial attacks, where small, manipulated inputs are used to deceive detection models, potentially allowing sophisticated threats to bypass security mechanisms.
Furthermore, the computational demands of real-time AI-based security, particularly in resource-constrained edge environments, can strain processing capabilities and delay response times, especially in latency-sensitive 6G applications like AVs and VR. To address these concerns, lightweight AI models, efficient processing techniques, and hardware acceleration are essential. Ultimately, while the combination of AI and blockchain enhances decentralization, issues of scalability, coordination, and privacy must be carefully managed. FL helps maintain data privacy without centralization, but it also introduces challenges in model synchronization and computational efficiency. A balanced, holistic approach is needed to ensure that the deployment of AI in 6G networks enhances security without compromising performance or ethical considerations [144].
9.7 Security and Privacy Challenges in the Metaverse
The Metaverse—a digital extension of the physical world—collects sensitive user data, including biometric and personal information, raising serious security and privacy concerns. Experts predict that this virtual space could become a significant target for cybercriminals due to the valuable data it handles. Users may face risks such as identity theft, data breaches, and exposure to harmful content, including misinformation and hate speech. Recent research has highlighted various dimensions of these challenges, categorizing risks into areas such as virtual assets, communication patterns, user data, and cultural contexts. Blockchain, Zero-Trust Architectures, and privacy-preserving frameworks have been proposed to mitigate these threats. Key ethical concerns involve digital ownership, inclusivity, algorithmic bias, and virtual harassment. The potential for virtual theft targeting digital currencies or assets mirrors real-world financial crimes. Social engineering also poses a serious threat, where avatars and interactive environments can manipulate users into revealing personal data. Additionally, malware attacks within the Metaverse could easily spread through seemingly harmless interactions with virtual content. Data breaches remain a significant risk, with attackers targeting personal and financial information. Lastly, while decentralization in the Metaverse offers users more autonomy, it also reduces oversight and accountability, increasing the risk of harmful or unregulated activity. Addressing these multifaceted challenges is essential to building a secure and ethical Metaverse ecosystem [145].
9.8 Network Security Challenges
Security remains a critical concern in the evolution of 6G wireless networks, particularly as Satellite-Terrestrial Integrated Network (STIN) technologies are deployed. Unlike previous generations, 6G requires a broader security approach that extends beyond traditional PL measures, necessitating comprehensive, integrated solutions. As a result, there is a pressing need to explore innovative security strategies that are both efficient and computationally lightweight. Some of the PL security mechanisms developed for 5G, such as LDPC-based codes, secure massive MIMO, and mmWave technologies, have the potential to be adapted for 6G applications. These techniques could be particularly valuable in emerging areas, such as ultra-massive MIMO systems and communications in the THz spectrum. In addition to enhancing the PL, ensuring robust network-level security requires a practical management framework for handling keys across various security zones. A distributed key management system that supports both unicast and multicast communications has been identified as a viable solution for secure STIN implementations. When effectively integrated, these PL and network-layer approaches can create a holistic, reliable security framework capable of protecting sensitive data in 6G environments [146].
9.9 Robust AI Defense Mechanisms for 6G
As 6G networks evolve, ensuring robust AI security becomes increasingly essential due to the persistent vulnerabilities of earlier generations (2G–5G) and the limitations of traditional security frameworks. To address these challenges, 6G requires an integrated, self-sustaining security system that leverages emerging technologies, such as AI, cloud computing, and big data. A promising solution is trusted endogenous security, a concept built on four foundational pillars: collaboration, intelligent proactive defense, trustworthiness, and privacy protection. Firstly, ubiquitous collaboration enhances the AI layer’s adaptability and resilience by promoting coordination across devices, layers, and network domains, allowing for more effective threat detection. Secondly, intelligent active defense transforms network protection from a reactive approach to a proactive one, enabling continuous learning and automated adaptation to evolving threats. Thirdly, trustworthiness in 6G emphasizes a secure and reliable ecosystem that supports access control, identity verification, and ensures system integrity, confidentiality, and resilience. Finally, privacy protection is critical as AI in 6G handles vast amounts of sensitive data. Combining distributed ML with existing privacy-preserving techniques helps reduce risks of data leakage and supports the development of a secure, intelligent data infrastructure [147].
9.10 GAI in XG Communication Security
In XG networks, GAI, particularly GANs, is increasingly being applied to enhance cybersecurity. GAN-based frameworks are employed for intelligent intrusion detection in IoT-supported edge computing networks. These networks face challenges such as limited processing capacity, data imbalance, and vulnerability to emerging attacks. Techniques such as fuzzy rough sets and convolutional GANs are employed to extract relevant features and generate synthetic data, enabling IDS to be trained more effectively. Autoencoders and Boundary Equilibrium GAN help manage data dimensionality and balance datasets, improving detection accuracy. For UAV-assisted IoT networks, conditional GANs paired with LSTM networks improve AD by generating realistic sequential data. A blockchain-enabled FL framework supports privacy-preserving, decentralized model training, with trust scores guiding collaboration and aggregation across UAVs. In Industrial Wireless Sensor Networks (IWSNs), trust management is crucial given the environment’s dynamic nature. GANs, along with Interval Type-2 Fuzzy Logic, enable the accurate evaluation, classification, and redemption of trust. The system classifies nodes based on historical trust patterns and reconstructs trust vectors to detect malicious behavior, even correcting false positives through GAN-based redemption mechanisms. The research collectively demonstrates that GAI-based models enhance XG network security by improving IDS capabilities, managing trust in distributed systems, and supporting privacy-preserving decentralized learning [148].
GAI enables a proactive approach to XG network security by simulating diverse cyberattack scenarios using synthetic data. This allows IDS models to learn from a wide range of threats before real incidents occur. Techniques such as Fuzzy Knowledge Distance facilitate the selection of relevant features, thereby reducing computational overhead while maintaining detection accuracy. Autoencoders facilitate compressed data representation, highlighting significant features and filtering noise, thereby enabling more effective AD. LSTMs are adept at identifying temporal patterns in UAV data, crucial for uncovering complex threats. FL ensures a collaborative model training process without exposing raw data, and blockchain enhances transparency and trust within the system. Type-2 fuzzy logic further supports real-time trust assessment in dynamic environments, such as IWSNs, helping to distinguish between faulty and malicious nodes. Collectively, these GAI-driven solutions create a scalable, secure, and intelligent framework for future wireless network protection [149].
9.11 Comparison of AI-Driven Security Mechanisms in 6G Networks
With the proliferation of intelligent and autonomous technologies in 6G/XG networks, securing the infrastructure across diverse layers has become increasingly complex. Various AI and GAI-based solutions have been proposed to address specific challenges, ranging from intrusion detection and trust management to AD, secure data sharing, and decentralized access control. This subsection presents a comparative analysis of state-of-the-art AI-enabled security frameworks across multiple domains, including mobile networks, IoT, UAV-assisted systems, IWSNs, and the Metaverse. Each method is evaluated based on its core security challenges, the technologies applied, its limitations, and its potential for future research. The goal is to identify strengths and gaps across different approaches to inform the design of more robust, adaptive, and scalable security architectures for future 6G deployments. Table 10 summarizes the reviewed solutions, offering a side-by-side comparison of how current AI-driven techniques address the multifaceted nature of cybersecurity in XG networks.

10 Open Challenges, Lessons Learned, and Recent Trends
The rapid evolution of wireless communication systems, particularly the transition to 6G, has yielded essential insights and revealed emerging trends that will define the future of intelligent networks. Experience with deploying AI-driven architectures, edge intelligence, FL, and advanced sensing systems has demonstrated both the potential and the inherent trade-offs of establishing autonomous, high-performing communication environments. Whether in increasing spectrum efficiency or enabling real-time decision-making and predictive analytics, AI-based 6G has proven transformative across industries. At the same time, emerging trends indicate the growing prominence of privacy-oriented models, energy-saving architectures, and interdisciplinary approaches that unite communication, computing, and cognition. This section draws the major conclusions and presents the most notable directions for current and future research and development in 6G.
10.1 Open Challenges and Future Research Directions
AI is poised to be a cornerstone of 6G network development, driving advancements in intelligence, efficiency, sustainability, and user-centric services. While AI has demonstrated success in enhancing traditional MA techniques like OMA and SDMA, its application to more advanced methods, such as RSMA, CDMA, Reconfigurable MA, and Universal MA, is still in its early stages. Universal MA, which integrates multiple MA domains simultaneously, presents a complex but promising area for AI-driven optimization due to its multidimensional structure. In FL, selecting the optimal MA scheme is challenging because pre-implementation data, such as CSI and model gradients, are unavailable. Unified MA frameworks such as RSMA and universal MA could ease this burden by enabling seamless integration with FL while minimizing decision complexity.
Similarly, Over-the-Air Computation offers new directions for analog-based uplink communication, demanding innovative MA strategies [150]. In ISAC, AI’s potential is being explored to resolve the inherent conflicts between sensing and communication tasks, especially in large-scale MIMO networks. Coordinated ISAC architectures, such as cell-free systems, require advanced methods for interference management and synchronization maintenance. Although RSMA has shown benefits for ISAC, the possibilities of universal MA-assisted ISAC remain largely untapped. Likewise, AI-assisted localization presents opportunities, but existing models often assume static users and neglect the impact of mobility and synchronization errors. Future efforts must focus on achieving real-time accuracy through universal MA-based localization systems that overcome the delays associated with traditional SIC techniques.
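The Over-the-Air Computation idea noted above can be illustrated with a deliberately idealized, noise-free sketch in which each device pre-inverts its own channel gain so that the signals superimposed in the air directly yield the desired sum at the receiver; practical systems must additionally handle noise, per-device power limits, and synchronization errors (all values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
values = np.array([0.7, 1.3, 2.0])       # sensor readings to be aggregated
h = rng.rayleigh(scale=1.0, size=3)      # per-device uplink channel gains

# Channel-inversion precoding: each device scales its value by 1/h_i
tx = values / h

# The wireless channel superimposes the transmissions; the receiver
# therefore observes the sum of the values directly (noise omitted here)
received = (h * tx).sum()
```

The key point is that the aggregation happens "for free" in the analog channel rather than in digital post-processing, which is what makes the scheme attractive for FL gradient aggregation and similar sum-type computations.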
AI also plays a critical role in managing mMTC, which involves ultra-dense networks of IoT devices. AI enables dynamic D2D communication, adaptive scheduling, spectrum optimization, fault detection, and EE protocols. To realize these capabilities, challenges related to scalability, interference, latency, heterogeneity, and energy consumption must be addressed. Techniques such as decentralized AI, DRL, multi-agent systems, and lightweight models, along with FL and AD, are essential for secure and reliable mMTC deployments. Digital twin technology, enhanced by AI, provides real-time virtual representations of physical 6G networks for planning, optimization, and predictive maintenance. Applications include network design, failure prediction, real-time load balancing, stress testing, and IoT deployment planning. However, this approach faces several challenges, including high computational demands, real-time data synchronization, model accuracy, scalability, and cybersecurity concerns. Addressing these challenges requires advances in distributed computing, edge-cloud integration, AI abstraction techniques, and the development of standardized, modular frameworks to ensure secure, interoperable digital twin platforms.
Sustainability is another major priority for 6G. AI contributes to energy-aware solutions in data centers, such as dynamic resource scaling, predictive power management, and adaptive cooling. Green AI techniques—including model pruning, quantization, and EE training—help minimize the carbon footprint of AI models. Future research must address the balance between performance and energy savings, integrate renewable energy sources, and enable scalable, low-power AI operations that can run on diverse network infrastructures, including legacy systems. Human-machine interaction and the Tactile Internet represent groundbreaking use cases for 6G, enabled by AI-driven, ultra-low-latency, context-aware communication. Applications such as remote surgery, immersive AR/VR, haptic feedback, and smart assistive technologies require real-time responsiveness and environmental awareness. Key challenges include latency sensitivity, privacy, energy limitations for wearable devices, and adaptability in dynamic environments. Edge AI, multimodal learning, federated privacy-preserving methods, and URLLC are necessary to support these systems effectively.
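Among the Green AI techniques listed above, post-training quantization is simple enough to sketch directly; the example below performs symmetric int8 quantization of a small weight tensor, trading a small reconstruction error for a 4x reduction in storage relative to float32 (the tensor values and scaling scheme are illustrative assumptions):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a weight tensor to int8."""
    scale = np.abs(w).max() / 127.0          # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to approximate float weights."""
    return q.astype(np.float32) * scale

w = np.array([0.02, -0.51, 0.33, 1.27], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)                     # lossy reconstruction of w
```

Production toolchains add refinements such as per-channel scales and calibration data, but the energy argument is the same: smaller weights mean less memory traffic and cheaper integer arithmetic at inference time.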
Despite notable progress, open issues persist in the design of AI-empowered MA protocols. Spectrum and interference management, especially in high-frequency bands (mmWave and THz), demand advanced AI-driven sensing and sharing techniques. MA protocols must also accommodate diverse traffic types, user mobility, and device intelligence, all while maintaining low power consumption and high scalability. Security and privacy concerns are heightened by AI integration and require robust protection at both PL and MAC layers. Compatibility with evolving hardware, including satellites, UAVs, RIS, and LEO technologies, adds additional layers of complexity to deployment and optimization. Power consumption remains a concern, particularly in low-power terminals that require AI functionalities. Thus, future AI-enabled MA protocols must be energy-efficient, lightweight, and scalable to dynamic environments.
Future research should explore integrating AI with enabling 6G technologies such as Satellite-Air-Ground Integrated Networks (SAGIN), WPT, RIS, MEC, ISAC, HBF, and THz communications. These integrations can help address challenges related to spectrum, complexity, power consumption, compatibility, and security. Furthermore, emerging AI paradigms like GAI and LLMs offer scalable, adaptive, and generalizable solutions for network optimization, security, and RM. The path forward calls for ongoing innovation in AI models to improve transparency, security, and EE across 6G systems. Explainable AI (XAI) will be crucial for establishing trust in critical sectors, such as healthcare and finance, which necessitate lightweight, domain-specific, and privacy-preserving solutions. AI-driven security frameworks should detect and mitigate threats in real time, while privacy-focused methods like FL will protect user data. Energy-efficient AI tailored for edge computing and real-time decision-making, along with techniques for processing multi-sensory data, will enable immersive and intelligent user experiences. Together, these research directions highlight AI’s transformative role in shaping the architecture, operation, and ethical deployment of 6G networks, ensuring a secure, adaptive, and sustainable future for global connectivity.
The evolution of 6G wireless communication is expected to follow the decade-long cycle of cellular advancements, with widespread adoption projected for the 2030s. 6G will not only optimize and extend the capabilities introduced in 5G—such as URLLC, mMTC, and industrial IoT—but also enable new, transformative applications that are yet to be fully imagined. A prominent trend is the expansion of mobile connectivity into more diverse verticals, with deeper integration across industrial, consumer, and public sectors. In parallel, the rapid progress of AI and ML is driving a shift toward AI-native network designs, where intelligent algorithms enhance system adaptability, operational efficiency, and user-centric experiences.
To meet the escalating demands for data throughput and real-time responsiveness, 6G is expected to exploit higher-frequency bands—including sub-THz and THz bands—to unlock wider bandwidths and enable more precise localization and sensing capabilities. This paves the way for networks that not only transmit data but also perceive and interpret physical environments, enabling new forms of human-technology interaction. Recent research highlights several transformative trends that are likely to define the 6G paradigm. These include AI/ML-based air interface design and optimization; expansion into new spectrum regions coupled with cognitive spectrum sharing techniques; integration of sensing and positioning within the network infrastructure; achievement of extreme performance targets in latency, reliability, and capacity; and the adoption of novel network architectures such as RAN-core convergence and sub-network frameworks.
Moreover, the growing complexity and heterogeneity of 6G use cases necessitate a rethinking of security and privacy mechanisms. In response, robust, decentralized solutions are being investigated to safeguard user data and maintain trust in intelligent, autonomous systems. Collectively, these advancements signal a shift toward a more modular, flexible, and platform-based network architecture, where the decoupling of the air interface from the core network enables scalable, efficient, and application-specific innovations.
The process of advancing toward 6G networks has yielded valuable lessons, spanning both technical applications and strategic research methods. The first lesson is that traditional rule-based communication systems cannot meet the requirements of ultra-dense, low-latency future networks, which demand a high degree of intelligence. Integrating AI, ML, and data-driven methods has been key to overcoming the challenges of dynamic spectrum management, RA, and real-time decision-making. In addition, multidisciplinary cooperation across wireless communications, computing, robotics, and data science has become a crucial source of innovation. Initial efforts at experimentation and deployment have also highlighted the need to strike a balance between optimizing performance and respecting data privacy, energy consumption, and system interpretability. These insights play an essential role in developing 6G architectures that are resilient, adaptive, and human-centric.
Lesson 1: Toward Intelligent and Secure 6G Networks
The integration of AI into 6G networks offers transformative potential but poses notable challenges. Key issues include the need for massive, diverse datasets, which raise concerns about privacy and data security. FL and privacy-preserving techniques offer promising solutions but require further refinement for practical deployment. Another concern is algorithmic bias, which can lead to unfair or discriminatory outcomes. Addressing this requires transparent, fair, and continuously monitored AI systems. Explainability is also critical—many AI models operate as black boxes, eroding trust and accountability, especially in safety-critical applications such as healthcare and AVs. AI systems also impose high computational complexity, demanding efficient processing and low-latency operations, particularly in edge computing environments. Moreover, as AI increases automation, it introduces new security vulnerabilities at multiple layers of the 6G architecture, necessitating AI-powered security frameworks, encryption, and integration with secure technologies such as blockchain. AI enables a wide range of innovative 6G applications, including smart cities, UAV-based network management, autonomous transportation, and immersive remote healthcare. However, these applications pose challenges in intelligence, EE, and privacy, especially in UAV swarms and real-time operations. Ethically, the deployment of AI in 6G must consider transparency, fairness, accountability, and the potential for job displacement. Strategies such as XAI, workforce retraining, and privacy-focused design are crucial for mitigating negative impacts. Looking ahead, AI is expected to be central in achieving 6G’s ambitious goals—ultra-high data rates, low latency, massive connectivity, and EE. To fulfill this vision, sustainable, secure, scalable, and interpretable AI solutions must be prioritized in future research [151].
Lesson 2: Key Lessons from AI-Driven Spectrum Management Research
This review highlights several key insights into the current state and future direction of AI-based SM in advanced wireless networks. One of the main observations is the limited focus on applying AI to beam alignment and tracking, which are crucial for maintaining reliable communication in high-frequency 6G environments. Despite the potential of AI to improve precision and responsiveness, these areas remain underexplored. Additionally, the widespread use of DL and RL methods in SM poses challenges due to their high computational requirements, making real-time deployment difficult in resource-constrained and latency-sensitive settings. To address this, future research should focus on developing lightweight models, efficient training techniques, and distributed computing solutions, such as edge computing. FL is gaining attention as a privacy-preserving approach for collaborative SM, but issues related to data variability, communication costs, and secure model updates remain. Efforts toward standardizing these frameworks can help ensure broader adoption. Another key issue is the lack of transparency in many AI models used for decision-making.
The complexity of current methods often makes it difficult to understand how decisions are made, hindering trust and accountability. The integration of XAI techniques is therefore essential for improving interpretability and reliability. The potential use of LLMs in SM is also noted as a promising but underexplored area. These models could enhance decision-making by interpreting complex data and managing network resources more intelligently. However, as AI becomes increasingly integrated into network operations, concerns about security and privacy grow. There is a growing need for robust security frameworks to protect data and ensure safe operation in dynamic network environments. Finally, the lack of standard testbeds and benchmarks limits the ability to evaluate and compare different AI approaches fairly. Developing standardized platforms and promoting transparent sharing of models and results would support more consistent progress in the field. Overall, these lessons underscore the importance of ongoing innovation, interdisciplinary collaboration, and practical solutions to ensure that AI can be effectively and safely deployed in XG wireless networks [152].
Lesson 3: Overcoming Core Technical Hurdles in 6G Evolution
Realizing the full potential of 6G demands overcoming numerous technical hurdles spanning the PL to network architecture. The use of THz frequencies offers unprecedented data rates but faces significant obstacles, including high propagation losses, atmospheric absorption, narrow coverage, and rapid signal fluctuations. These issues necessitate new transceiver designs, adaptive beamforming, and regulatory frameworks beyond 0.3 THz. Similarly, the expansion of carrier bandwidths introduces design complexity, as managing wide GHz ranges challenges the linearity and power efficiency of RF front-end circuits, making carrier aggregation and calibration more demanding. Power supply also poses a significant bottleneck, especially for mobile devices and IoT systems, calling for innovations in EH, efficient precoding, and low-power processing.
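The frequency dependence of the propagation losses discussed above follows directly from the free-space path-loss formula, FSPL(dB) = 20 log10(4*pi*d*f/c); the short sketch below (with illustrative distances and carrier frequencies, and ignoring the additional atmospheric absorption that dominates in parts of the THz band) quantifies the extra loss incurred when moving from a 28 GHz mmWave carrier to a 300 GHz sub-THz carrier at the same distance:

```python
import numpy as np

def fspl_db(d_m, f_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0                    # speed of light, m/s
    return 20.0 * np.log10(4.0 * np.pi * d_m * f_hz / c)

loss_28ghz = fspl_db(100.0, 28e9)        # 5G mmWave carrier, 100 m link
loss_300ghz = fspl_db(100.0, 300e9)      # low sub-THz carrier, same link
delta = loss_300ghz - loss_28ghz         # extra loss from the frequency jump alone
```

Because the formula scales with 20 log10(f), the 28 GHz to 300 GHz jump alone costs about 20.6 dB before molecular absorption is even considered, which is why the text above calls for adaptive beamforming and new transceiver designs to recover the link budget.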
Additionally, 6G envisions the dynamic integration of heterogeneous networks, unlike the static combinations in 5G, which require real-time, intelligent coordination across diverse communication standards. Tactile Internet applications—requiring ultra-low latency and real-time control—face open research problems in coding, network slicing, and security. The evolving network introduces security vulnerabilities spanning physical and network layers, underscoring the need for integrated, lightweight, and robust security solutions. Moreover, with the rise of 3D networking, new challenges in mobility, routing, and interference emerge, particularly in aerial and vertical domains. Device compatibility is another concern, as supporting AI, XR, and terabit-per-second throughput may outpace the capabilities of current devices, demanding cost-effective upgrades and backward compatibility. Finally, SM remains critical due to interference risks and spectrum scarcity, requiring advanced sharing, scheduling, and interference cancellation strategies. In summary, bridging the 6G vision with its technical feasibility requires a coordinated research effort across high-frequency design, EE architecture, spectrum policy, secure protocols, and intelligent network orchestration. These lessons form a foundation for guiding innovation and standardization in future wireless systems [153].
Lesson 4: Non-Technical Insights for a Sustainable and Inclusive 6G Future
The evolution of 6G requires more than just technological advancements—it must also consider fundamental scientific, industrial, societal, and commercial aspects. Key lessons include the need for stronger integration with basic sciences, such as mathematics and physics, as current communication theories may not fully address emerging complexities. Additionally, technological progress must align with the capabilities of upstream industries to ensure feasibility and deployment. A demand-driven roadmap involving end users is vital for real-world applicability, while commercialization strategies—such as cost models and backward compatibility—must be prioritized early. Public health, psychological acceptance, and concerns over electromagnetic wave exposure and privacy also require attention to ensure societal trust. Finally, effective global connectivity will depend on addressing digital divides, ensuring content relevance, and overcoming accessibility challenges in underserved regions. These insights highlight the importance of a multidisciplinary, human-centric approach in shaping a sustainable and inclusive 6G future.
Lesson 5: Integration of Generative AI in XG Wireless Networks
The integration of GAI into XG wireless networks offers promising opportunities to optimize performance, enhance flexibility, and enable intelligent adaptation across diverse network scenarios. Models such as GANs, GDMs, and GFlowNets are particularly effective in simulating complex wireless environments and generating high-quality synthetic data. These capabilities enhance a variety of tasks, including RA, CSI estimation, and AD. When combined with RL, GAI further strengthens decision-making processes for managing complex, high-dimensional optimization problems, such as those found in NTNs. Despite their potential, a significant limitation of current GAI models is their limited ability to generalize beyond the training data, hindering their effectiveness in unseen or dynamic network conditions. This challenge, known as Domain Generalization (DG), becomes critical in real-world wireless environments where distributional shifts between training and deployment domains are common. To overcome this, future research should focus on hybrid learning models that integrate both generative and discriminative approaches to extract domain-invariant features. Additionally, applying regularization techniques, such as Invariant Risk Minimization (IRM), can help ensure that models learn features that are stable across different environments. GAI models can also be leveraged to generate synthetic data representing multiple domains. When combined with IRM, they can be trained to identify features that generalize well, even under adversarial or unfamiliar conditions. This integration improves both the robustness and adaptability of models in real-world deployments. Ultimately, combining GAI with DG and IRM approaches enables the development of more scalable, resilient, and intelligent XG wireless systems that maintain high performance across a wide range of dynamic, heterogeneous conditions.
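The IRM regularizer referenced above can be made concrete with the IRMv1 formulation, which penalizes the squared gradient of each environment's risk with respect to a dummy scalar classifier fixed at w = 1, so that a predictor is rewarded for being simultaneously optimal in every environment; the sketch below uses squared loss and small synthetic per-environment data (all values and the penalty weight are illustrative):

```python
import numpy as np

def irm_penalty(preds, targets):
    """IRMv1 penalty for squared loss with a dummy scalar classifier w.

    With R_e(w) = mean((w * preds - targets)^2), the penalty is the
    squared gradient of R_e with respect to w, evaluated at w = 1:
    dR_e/dw|_{w=1} = mean(2 * preds * (preds - targets)).
    """
    grad = np.mean(2.0 * preds * (preds - targets))
    return grad ** 2

def irm_objective(envs, lam):
    """Sum of per-environment empirical risks plus the IRM regularizer."""
    risk = sum(np.mean((p - t) ** 2) for p, t in envs)
    pen = sum(irm_penalty(p, t) for p, t in envs)
    return risk + lam * pen

# Two synthetic "environments" (e.g., two deployment cells with
# different channel statistics): (model predictions, ground truth)
env_a = (np.array([1.0, 2.0]), np.array([1.1, 1.9]))
env_b = (np.array([0.5, 1.5]), np.array([0.6, 1.4]))

loss = irm_objective([env_a, env_b], lam=10.0)
```

In a full training loop this objective would be minimized over the feature extractor's parameters; increasing `lam` pushes the model toward features whose optimal readout is invariant across environments, at the cost of some in-distribution accuracy.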
Lesson 6: THz Propagation to AI Integration in 6G Networks
The path toward realizing 6G wireless communication systems is marked by a variety of technical, architectural, and economic challenges. One of the foremost issues is the propagation limitations of THz frequencies. While they enable ultra-high data rates, these frequencies suffer from high path loss, limited antenna gain, and atmospheric absorption, demanding advanced transceiver and antenna designs. Additionally, the lack of standardized models for sub-mm-wave propagation complicates reliable channel modeling under dynamic environmental conditions. Hardware heterogeneity presents another significant hurdle. The diverse range of frequencies, architectures, and communication protocols expected in 6G—especially with the inclusion of massive MIMO and AI-driven technologies—will require substantial upgrades to existing infrastructure. Autonomous wireless systems, including AI-enabled Industry 4.0 environments, UAVs, and CANs, require integrating multiple subsystems, which makes system design and interoperability complex.
Further complications arise from the need to manage massive backhaul capacities for geographically dispersed IoT and access networks, requiring novel high-speed backhaul solutions. Efficient spectrum utilization, interference management, and intelligent beam control in THz bands are also critical technical demands. Moreover, PL security remains a pressing concern, especially for privacy-sensitive, human-centric applications, necessitating new frameworks beyond those of 5G. On the economic front, the deployment of 6G will entail significant infrastructure investment. However, with careful planning and the strategic reuse of 5G resources—such as spectrum, hardware, and network infrastructure—cost-effective upgrades are possible. Overall, these challenges highlight the importance of cross-disciplinary research and innovation in advanced signal processing, AI integration, spectrum sharing, and economic planning for a successful transition to 6G networks.
Lesson 7: Building the Foundations for a Trustworthy and Scalable Metaverse
The convergence of AI, beyond 5G/6G networks, and the Metaverse presents unprecedented opportunities but introduces a multitude of complex challenges. From an AI perspective, integrating with emerging technologies such as Industry 5.0, aerial-ground vehicles, and LLMs poses scalability, interoperability, and EE concerns. Moreover, delivering unique, immersive experiences and ensuring democratization in digital spaces demands bias-free AI systems, real-time emotion mapping, and secure content governance. Intellectual property protection becomes increasingly challenging due to anonymity, ease of replication, and decentralized environments, necessitating collaboration among AI, legal, and enforcement mechanisms. On the beyond 5G/6G front, the Metaverse requires a shift toward URLLC and adaptive mobile networks. Challenges include managing highly dynamic and complex environments, implementing zero-touch network automation, and ensuring seamless connectivity across SAGIN. These networks must support high data fidelity and uninterrupted, immersive services, which call for novel architectures and reliability metrics. Applications such as avatar interaction, robotic surgery, and telepresence further underscore the need for robust tactile Internet capabilities.
When AI and 6G are integrated to support Metaverse applications, additional issues arise, including data privacy in edge AI, computational load on resource-constrained devices, and the need for intelligent, self-adaptive automation. Future directions include leveraging FL, self-SL AI, and distributed computing across the edge-cloud continuum. Furthermore, EE becomes a critical key performance indicator, as both AI execution and 6G infrastructure incur high energy demands. Solutions such as adaptive compression, edge offloading, and EH are crucial to sustainable operation. Ultimately, for the Metaverse to be socially and environmentally sustainable, concerns such as climate impact resulting from high energy consumption (e.g., from blockchain and hyperscale data centers), the widening digital divide, mental health effects, and complex privacy risks must be addressed. Achieving sustainability requires a governance framework that ensures equitable access, monitors mental well-being, and safeguards data across dimensions. Collectively, these lessons underscore the pressing need for interdisciplinary collaboration and innovation to ensure the Metaverse evolves into a secure, inclusive, and sustainable digital frontier.
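The energy trade-off behind edge offloading noted above can be made concrete with a back-of-envelope sketch. This is purely illustrative and not from the survey: the function names, the energy-per-cycle constant, and the transmit-power figure are hypothetical placeholders for device-measured parameters.

```python
# Back-of-envelope offloading decision: execute a task locally or offload it
# to an edge server, whichever costs the device less energy. All constants
# below are illustrative assumptions, not measured 6G parameters.

def local_energy(cycles, energy_per_cycle=1e-9):
    """Energy (J) to compute the task on the device (assumed 1 nJ/cycle)."""
    return cycles * energy_per_cycle

def offload_energy(bits, rate_bps, tx_power_w=0.5):
    """Energy (J) to transmit the task input to the edge server."""
    return tx_power_w * bits / rate_bps

def should_offload(cycles, bits, rate_bps):
    """Offload only when transmission costs less energy than local compute."""
    return offload_energy(bits, rate_bps) < local_energy(cycles)

# A compute-heavy task with a small input favors offloading at a good link rate:
print(should_offload(cycles=5e9, bits=1e6, rate_bps=1e8))  # True (0.005 J vs. 5 J)
```

In practice the decision would also weigh latency deadlines, server load, and channel state, which is exactly where the learning-based controllers discussed in this lesson come in.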
Lesson 8: Multiple Access for NTN-Assisted IoT in 6G
The integration of NTNs with IoT in 6G ecosystems presents a wide range of technical challenges and emerging research opportunities. One key lesson is the need for standardized protocols and regulations tailored to the unique requirements of NTN-enabled IoT systems. Managing highly heterogeneous networks—spanning diverse device types, service demands, and communication standards—requires advanced algorithms for efficient orchestration. EE remains a central concern, especially in battery-constrained IoT devices operating within NTN environments. This highlights the need for innovative MA schemes and lightweight communication protocols. Furthermore, improving the spectral and operational efficiency of existing and emerging access techniques is essential to meet the performance expectations of 6G. Security and privacy challenges are magnified in dense IoT networks that exchange massive amounts of data, requiring robust and scalable security frameworks. Equally important is the support for URLLC, especially for time-sensitive applications such as XR, autonomous systems, and telemedicine.
AI and ML will play a transformative role in enabling intelligent, autonomous network operations. However, this integration introduces complexities that must be addressed through novel architectures and adaptive learning strategies. Environmental sustainability is another critical consideration, prompting research into green communication solutions and energy-aware protocol designs. Additionally, the role of edge computing in enhancing the processing capabilities of NTN-enabled IoT devices cannot be overlooked. Research into distributed computing models, efficient data offloading, and edge-access integration will be key. Finally, cross-layer optimization that considers the PL, link, and network layers holistically will be essential to achieve seamless performance across diverse application scenarios. Collectively, these insights underscore the importance of interdisciplinary research and innovation in realizing secure, efficient, and sustainable NTN-assisted IoT networks in the 6G era.
Lesson 9: Enabling Cognitive, Green, and Secure 6G Networks
A comprehensive examination of emerging research themes reveals that AI will be central to the success of 6G networks; however, its integration presents significant technical and strategic challenges. Adaptive learning mechanisms, such as DRL and transfer learning, are expected to form the backbone of intelligent network management. These approaches must evolve to support heterogeneous devices, dynamic traffic patterns, and real-time processing while ensuring scalability and resilience. Designing flexible, AI-native network architectures that blend cloud and edge computing is also critical. FL emerges as a solution to address data privacy and distributed intelligence. However, deploying AI at the edge introduces new challenges in resource-constrained environments, demanding lightweight models and efficient data processing pipelines.
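The FL idea referenced above — clients training locally while a server aggregates only model parameters, never raw data — can be sketched in a few lines. This is a minimal, hypothetical FedAvg-style illustration on a toy linear model, not an implementation from the survey; all names, learning rates, and data are assumptions.

```python
import numpy as np

# Illustrative FedAvg round: each client takes a local gradient step on its
# own (private) data, and the server averages the resulting weights,
# weighted by local dataset size. Raw data never leaves the client.

def local_update(weights, data, lr=0.1):
    """One local gradient-descent step on a toy linear regression model."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)  # mean-squared-error gradient
    return weights - lr * grad

def fedavg_round(global_weights, client_datasets):
    """Server-side aggregation of client updates (weighted by data size)."""
    updates, sizes = [], []
    for data in client_datasets:
        updates.append(local_update(global_weights.copy(), data))
        sizes.append(len(data[1]))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Synthetic clients sharing one underlying model (noiseless, for clarity).
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = fedavg_round(w, clients)
print(np.round(w, 2))  # the global model approaches [1., -2.]
```

Real 6G deployments would add the complications this lesson names — non-IID client data, stragglers, and communication-efficient or privacy-hardened aggregation — on top of this basic loop.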
Sustainability is a growing concern in 6G. Green AI solutions, including edge hardware and predictive energy optimization algorithms, are essential for reducing environmental impact. AI can also facilitate the integration of renewable energy sources and adaptive power management strategies, contributing to greener network operations. URLLC, a hallmark of 6G, can be enhanced by real-time AI decision-making at the network edge. However, designing AI systems that can function under uncertainty and incomplete data remains a research gap. Additionally, effective orchestration of computational resources using AI-driven tools is necessary to maintain latency guarantees. Cybersecurity is another key area where AI can provide real-time threat detection, automated response mechanisms, and adaptive defense strategies. Future efforts must address the evolving nature of cyber threats and create self-updating, context-aware AI security frameworks. AI is also expected to drive innovations in emerging 6G technologies, including RIS, ISAC, Open RAN, and NTNs. These areas benefit from AI-enhanced beamforming, RA, and environmental sensing, but require tailored models that address constraints on latency, dynamic topologies, and high mobility.
The vision of ubiquitous computing in 6G relies on intelligent resource distribution across terrestrial BSs, UAVs, and LEO satellites. This demands efficient, dynamic, and hierarchical RA strategies, where DRL and other advanced algorithms will play a crucial role. Finally, integrating LLMs into 6G introduces a new cognitive layer. LLMs will enable semantic communication, intent-driven automation, and cross-modal optimization. However, key challenges remain in ensuring model efficiency, privacy-preserving training (e.g., via FL), and robust real-time inference. Hybrid quantum-classical systems, sparsity-aware modeling, and DT-based simulations may open new research frontiers. Collectively, these lessons emphasize the importance of interdisciplinary research—encompassing AI, networking, hardware, and sustainability—to develop scalable, secure, and intelligent 6G infrastructures that can support XG applications and services.
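The RL-driven resource-allocation idea above can be illustrated with a deliberately simplified agent that learns which access tier (terrestrial BS, UAV relay, or LEO satellite) yields the best utility for a user. This is a toy epsilon-greedy bandit sketch under assumed synthetic rewards — a real DRL scheduler would use measured KPIs, richer state, and deep function approximation rather than this tabular stand-in.

```python
import random

# Toy value-learning agent that assigns a user to one of three access tiers.
# TRUE_REWARD encodes *assumed* average utilities (e.g., inverse latency);
# these numbers are hypothetical, purely for illustration.

TIERS = ["BS", "UAV", "LEO"]
TRUE_REWARD = {"BS": 1.0, "UAV": 0.6, "LEO": 0.3}

def run_bandit(episodes=5000, eps=0.1, alpha=0.1, seed=1):
    random.seed(seed)
    q = {t: 0.0 for t in TIERS}  # estimated value of each tier
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known tier, sometimes explore
        if random.random() < eps:
            a = random.choice(TIERS)
        else:
            a = max(q, key=q.get)
        reward = TRUE_REWARD[a] + random.gauss(0, 0.05)  # noisy feedback
        q[a] += alpha * (reward - q[a])  # incremental value update
    return q

q = run_bandit()
best = max(q, key=q.get)
print(best)  # the agent ends up preferring the highest-utility tier
```

Hierarchical 6G allocation would stack such decisions (tier selection, beam, power, bandwidth) and coordinate them across agents, which is where MARL and DRL become necessary.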
Lesson 10: AI-Powered 6G Multiple Access
The integration of AI into MA protocol design and optimization for 6G systems remains at a nascent yet promising stage. Emerging use cases across various domains, including SAGIN, WPT, RIS, MEC, ISAC, HBF, and THz technologies, illustrate AI’s potential to enhance performance and address critical challenges. AI can facilitate power efficiency when combined with WPT or RIS, reduce system complexity through MEC and ISAC integration, and manage spectrum and interference more effectively by synergizing with HBF, THz, and ISAC technologies. Additionally, AI integration with WPT, SAGIN, RIS, and ISAC helps address issues related to system compatibility, scalability, and adaptability. Meanwhile, combinations of HBF and RIS can enhance network security and privacy. Moreover, the emergence of advanced AI paradigms—such as GAI and LLMs—signals a paradigm shift in wireless communications. GAI excels in learning complex data distributions for optimization and security, while LLMs enable continuous, knowledge-driven management of dynamic networking tasks. The strength of these innovations lies in their scalability, accuracy, and generalization capability, offering promising avenues for overcoming future MA challenges in 6G systems. Collectively, these insights highlight the transformative impact of evolving AI techniques on shaping intelligent, adaptive, and secure MA solutions for XG networks.
11 Conclusion and Future Directions
This paper presents a comprehensive overview of the transformative potential of 6G networks and the pivotal role of AI in shaping their architecture, management, and performance. The study explored how AI-empowered multiple access techniques, intelligent protocol design, and adaptive optimization strategies can address the complex requirements of next-generation communication systems. These requirements range from ultra-low-latency communications, energy efficiency, and spectrum management to enhanced security and support for immersive applications such as the Metaverse, digital twins, and remote surgery. The integration of key technologies such as THz communication, reconfigurable intelligent surfaces, quantum communication, and NTNs underscores 6G’s ambition to deliver ubiquitous, innovative, and sustainable connectivity. AI has emerged not only as a tool for performance optimization but also as a foundational enabler of self-organizing, context-aware, and resilient networks capable of supporting massive numbers of IoT devices, the tactile Internet, and immersive services. Despite significant advancements, key challenges remain, including the need for standardized, unified multiple-access frameworks, scalable AI architectures, secure protocol designs, and ethical considerations surrounding AI deployment. Future research should focus on cross-layer optimization, federated learning, low-complexity AI models, and unified multiple-access strategies to unlock the full potential of AI-driven 6G networks. Overall, the convergence of AI and 6G marks a paradigm shift toward human-centric, adaptive, and intelligent wireless ecosystems, laying the groundwork for a seamless, efficient, and secure digital future.
Acknowledgement: Not applicable.
Funding Statement: The authors received no specific funding for this study.
Author Contributions: The manuscript was written through the contributions of all authors. Conceptualization, Agbotiname Lucky Imoize; methodology, Kinzah Noor, Agbotiname Lucky Imoize, and Michael Adedosu Adelabu; writing—original draft preparation, Kinzah Noor, and Agbotiname Lucky Imoize; writing—review and editing, Kinzah Noor, Agbotiname Lucky Imoize, Michael Adedosu Adelabu, and Cheng-Chi Lee; supervision, Agbotiname Lucky Imoize; project administration, Agbotiname Lucky Imoize, Michael Adedosu Adelabu, and Cheng-Chi Lee; funding acquisition, Agbotiname Lucky Imoize. All authors reviewed the results and approved the final version of the manuscript.
Availability of Data and Materials: All data generated or analyzed during this study are included in this published article.
Ethics Approval: Not applicable.
Conflicts of Interest: The authors declare no conflicts of interest to report regarding the present study.
List of Abbreviations and Meanings
| 3GPP | 3rd Generation Partnership Project |
| 5G | Fifth Generation |
| 6G | Sixth Generation |
| AD | Anomaly Detection |
| AI | Artificial Intelligence |
| AIGC | Artificial Intelligence-Generated Content |
| AmBC | Ambient Backscatter Communication |
| ANN | Artificial Neural Network |
| AP | Access Point |
| AVs | Autonomous Vehicles |
| BCJR | Bahl, Cocke, Jelinek, and Raviv |
| BP | Belief Propagation |
| BS | Base Station |
| CDMA | Code Division Multiple Access |
| CNN | Convolutional Neural Network |
| CoMP | Coordinated Multi-Point |
| CR | Cognitive Radio |
| CSI | Channel State Information |
| CSMA | Carrier Sense Multiple Access |
| CW | Contention Window |
| D2D | Device-to-Device |
| DCF | Distributed Coordination Function |
| DDoS | Distributed Denial of Service |
| DDPG | Deep Deterministic Policy Gradient |
| DFT-s-OFDM | Discrete Fourier Transform Spread Orthogonal Frequency-Division Multiplexing |
| DG | Domain Generalization |
| DL | Deep Learning |
| DNN | Deep Neural Network |
| DQN | Deep Q-Network |
| DRL | Deep Reinforcement Learning |
| DRA | Dynamic Resource Allocation |
| DSA | Dynamic Spectrum Access |
| DSM | Dynamic Spectrum Management |
| ECCs | Error-Correcting Codes |
| EE | Energy Efficiency |
| EH | Energy Harvesting |
| EW | Electromagnetic Waves |
| FAMA | Fluid Antenna Multiple Access |
| FDD | Frequency-Division Duplex |
| FDMA | Frequency Division Multiple Access |
| FL | Federated Learning |
| FSO | Free Space Optics |
| GA | Genetic Algorithms |
| GAI | Generative AI |
| GANs | Generative Adversarial Networks |
| GBM | Gradient Boosting Machines |
| GDM | Generative Diffusion Model |
| GFlowNets | Generative Flow Networks |
| GRS | Generalized RS |
| HBF | Hybrid Beamforming |
| HR | Holographic Radio |
| HRS | Hierarchical RS |
| IBFD | In-Band Full-Duplex |
| IDS | Intrusion Detection System |
| IM | Index Modulation |
| IoT | Internet of Things |
| IRM | Invariant Risk Minimization |
| ISAC | Integrated Sensing and Communication |
| IWSNs | Industrial Wireless Sensor Networks |
| KBs | Knowledge Bases |
| KNN | K-Nearest Neighbors |
| LDPC | Low-Density Parity Check |
| LEO | Low-Earth Orbit |
| LLM | Large Language Model |
| LoS | Line-of-Sight |
| LSTMs | Long Short-Term Memory networks |
| LTE | Long Term Evolution |
| MA | Multiple Access |
| MAC | Medium Access Control |
| MARL | Multi-Agent Reinforcement Learning |
| MEC | Mobile Edge Computing |
| MIMO | Multiple Input Multiple Output |
| MIoT | Massive IoT |
| mMTC | Massive Machine-Type Communication |
| mmWave | Millimeter-wave |
| NeSy AI | Neuro-Symbolic AI |
| NET4AI | NETwork for AI |
| NLoS | Non-Line-of-Sight |
| NLP | Natural Language Processing |
| NNs | Neural Networks |
| NOMA | Non-Orthogonal Multiple Access |
| NR | New Radio |
| NTNs | Non-Terrestrial Networks |
| OAM | Orbital Angular Momentum |
| OMA | Orthogonal Multiple Access |
| OWC | Optical Wireless Communications |
| PAPR | Peak-to-Average Power Ratio |
| PCA | Principal Component Analysis |
| PL | Physical Layer |
| PSO | Particle Swarm Optimization |
| QAM | Quadrature Amplitude Modulation |
| QoS | Quality of Service |
| RA | Resource Allocation |
| RANs | Radio Access Networks |
| RF | Radio Frequency |
| RIS | Reconfigurable Intelligent Surfaces |
| SE | Spectral Efficiency |
| SIC | Successive Interference Cancellation |
| SS | Spectrum Sensing |
| Tbps | Terabit-per-second |
References
1. Imoize AL, Adedeji O, Tandiya N, Shetty S. 6G enabled smart infrastructure for sustainable society: opportunities, challenges, and research roadmap. Sensors. 2021;21(5):1709. doi:10.3390/s21051709.
2. Pennanen H, Hänninen T, Tervo O, Tölli A, Latva-Aho M. 6G: the intelligent network of everything. IEEE Access. 2025;13:1319–421. doi:10.1109/ACCESS.2024.3521579.
3. Quy VK, Chehri A, Quy NM, Han ND, Ban NT. Innovative trends in the 6G era: a comprehensive survey of architecture, applications, technologies, and challenges. IEEE Access. 2023;11:39824–44. doi:10.1109/ACCESS.2023.3269297.
4. Salih QM, Rahman MA, Firdaus A, Jassim MR, Kahtan H, Zain JM, et al. A review and bibliometric analysis of the current studies for the 6G networks. Comput Model Eng Sci. 2024;140(3):2165–206. doi:10.32604/cmes.2024.028132.
5. Meena P, Pal MB, Jain PK, Pamula R. 6G communication networks: introduction, vision, challenges, and future directions. Wirel Pers Commun. 2022;125(2):1097–123. doi:10.1007/s11277-022-09590-5.
6. Chen Z, Zhang Z, Yang Z. Big AI models for 6G wireless networks: opportunities, challenges, and research directions. IEEE Wirel Commun. 2024;31(5):164–72. doi:10.1109/mwc.015.2300404.
7. Alshaer NA, Ismail TI. AI-driven quantum technology for enhanced 6G networks: opportunities, challenges, and future directions. J Laser Sci Applicat. 2024;1(1):21–30. doi:10.21608/jlsa.2024.290093.1004.
8. Nayak S, Patgiri R. 6G communication: a vision on the potential applications. In: Edge Analytics: Select Proceedings of 26th International Conference—ADCOM 2020. Singapore: Springer; 2022 Apr. p. 203–18.
9. Kharche S, Kharche J. 6G intelligent healthcare framework: a review on role of technologies, challenges and future directions. J Mobile Multimedia. 2023;19(3):603–44.
10. Fiandrino C, Attanasio G, Fiore M, Widmer J. Toward native explainable and robust AI in 6G networks: current state, challenges and road ahead. Comput Commun. 2022;193(10):47–52. doi:10.1016/j.comcom.2022.06.036.
11. Ali M, Yasir MN, Bhatti DMS, Nam H. Optimization of spectrum utilization efficiency in cognitive radio networks. IEEE Wirel Commun Lett. 2022;12(3):426–30. doi:10.1109/lwc.2022.3229110.
12. Rafiqi H, Mahendru G, Gupta SH. Effect of relay-based communication on probability of detection for spectrum sensing in LoRaWAN. Wirel Pers Commun. 2023;130(4):2345–66. doi:10.1007/s11277-023-10273-y.
13. Li S, Sun Y, Yue W, Yao M, Han Y, Gui G, et al. A novel multi-scale time fusion transformer for long-range spectrum occupancy prediction. IEEE Transact Vehic Technol. 2025;74(6):9299–312. doi:10.1109/TVT.2025.3540920.
14. Serghini O, Semlali H, Maali A, Ghammaz A, Serrano S. 1-D convolutional neural network-based models for cooperative spectrum sensing. Future Internet. 2023;16(1):14.
15. Quy VK, Nguyen DC, Van Anh D, Quy NM. Federated learning for green and sustainable 6G IIoT applications. Int Things. 2024;25:101061. doi:10.1016/j.iot.2024.101061.
16. Liu Y, Yi W, Ding Z, Liu X, Dobre OA, Al-Dhahir N. Developing NOMA to next generation multiple access: future vision and research opportunities. IEEE Wirel Commun. 2022;29(6):120–7. doi:10.1109/mwc.007.2100553.
17. Cui Q, You X, Wei N, Nan G, Zhang X, Zhang J, et al. Overview of AI and communication for 6G network: fundamentals, challenges, and future research opportunities. Sci China Inf Sci. 2025;68(7):171301. doi:10.1007/s11432-024-4337-1.
18. Jiang X, Hou P, Zhu H, Li B, Wang Z, Ding H. Dynamic and intelligent edge server placement based on deep reinforcement learning in mobile edge computing. Ad Hoc Netw. 2023;145(8):103172. doi:10.1016/j.adhoc.2023.103172.
19. Hossain MA, Liu W, Ansari N. Computation-efficient offloading and power control for MEC in IoT networks by meta-reinforcement learning. IEEE Internet Things J. 2024;11(9):16722–30. doi:10.1109/jiot.2024.3355023.
20. Chu S, Gao C, Xu M, Ye K, Xiao Z, Xu C. Efficient multi-task computation offloading game for mobile edge computing. IEEE Trans Serv Comput. 2023;17(1):30–46. doi:10.1109/TSC.2023.3332140.
21. Zhou H, Zhang Z, Wu Y, Dong M, Leung VC. Energy efficient joint computation offloading and service caching for mobile edge computing: a deep reinforcement learning approach. IEEE Trans Green Commun Netw. 2022;7(2):950–61. doi:10.1109/tgcn.2022.3186403.
22. Li Y, Wang T, Wu Y, Jia W. Optimal dynamic spectrum allocation-assisted latency minimization for multiuser mobile edge computing. Digit Commun Netw. 2022;8(3):247–56. doi:10.1016/j.dcan.2021.10.008.
23. Liu Z, Lan Q, Huang K. Resource allocation for multiuser edge inference with batching and early exiting. IEEE J Sel Areas Commun. 2023;41(4):1186–200. doi:10.1109/jsac.2023.3242724.
24. Yan M, Luo M, Chan CA, Gygax AF, Li C. Energy-efficient content fetching strategies in cache-enabled D2D networks via an Actor-Critic reinforcement learning structure. IEEE Trans Vehicular Technol. 2024;73(11):17485–95. doi:10.1109/tvt.2024.3419012.
25. Clerckx B, Kim J, Choi KW, Kim DI. Foundations of wireless information and power transfer: theory, prototypes, and experiments. Proc IEEE. 2022;110(1):8–30. doi:10.1109/jproc.2021.3132369.
26. Hasan SM, Mahata K, Hyder MM. Some new perspectives on the multi-user detection in uplink grant-free NOMA using deep neural network. Authorea Preprints. 2023.
27. Lyu X, Aditya S, Kim J, Clerckx B. Rate-splitting multiple access: the first prototype and experimental validation of its superiority over SDMA and NOMA. IEEE Trans Wirel Commun. 2024;23(8):9986–10000. doi:10.1109/twc.2024.3367891.
28. Clerckx B, Mao Y, Jorswieck EA, Yuan J, Love DJ, Erkip E, et al. A primer on rate-splitting multiple access: tutorial, myths, and frequently asked questions. IEEE J Sel Areas Commun. 2023;41(5):1265–308. doi:10.1109/jsac.2023.3242718.
29. Mao Y, Dizdar O, Clerckx B, Schober R, Popovski P, Poor HV. Rate-splitting multiple access: fundamentals, survey, and future research trends. IEEE Communicat Surv Tutor. 2022;24(4):2073–126. doi:10.1109/comst.2022.3191937.
30. Bhide P, Shetty D, Mikkili S. Review on 6G communication and its architecture, technologies included, challenges, security challenges and requirements, applications, with respect to AI domain. IET Quant Communicat. 2025;6(1):e12114. doi:10.1049/qtc2.12114.
31. Maduranga MWP, Tilwari V, Rathnayake RMMR, Sandamini C. AI-enabled 6G internet of things: opportunities, key technologies, challenges, and future directions. Telecom. 2024 Aug;5(3):804–22. doi:10.3390/telecom5030041.
32. Chataut R, Nankya M, Akl R. 6G networks and the AI revolution—exploring technologies, applications, and emerging challenges. Sensors. 2024;24(6):1888. doi:10.3390/s24061888.
33. Shah AS, Karabulut MA, Cinar E, Rabie K. A survey on fluid antenna multiple access for 6G: a new multiple access technology that provides great diversity in a small space. IEEE Access. 2024;12:88410–25. doi:10.1109/access.2024.3418291.
34. Gbenga-Ilori A, Imoize AL, Noor K, Adebolu-Ololade PO. Artificial intelligence empowering dynamic spectrum access in advanced wireless communications: a comprehensive overview. AI. 2025;6(6):126. doi:10.3390/ai6060126.
35. Basu A, Dash SP, Kaushik A, Ghose D, Di Renzo M, Eldar YC. Performance analysis of RIS-aided index modulation with greedy detection over Rician fading channels. IEEE Trans Wirel Commun. 2024;23(8):8465–79. doi:10.1109/TWC.2024.3350921.
36. Duranay AE, Memisoglu E, Özbakiş B, Arslan H. Phase rotation approach with mixed-numerology architecture for PAPR reduction in 5G and beyond. IEEE Access. 2023;11:48113–22. doi:10.1109/ACCESS.2023.3272044.
37. Mi H, Ai B, He R, Bodi A, Caromi R, Wang J, et al. Measurement-based prediction of mmWave channel parameters using deep learning and point cloud. IEEE Open J Vehic Technol. 2024;5:1059–72. doi:10.1109/ojvt.2024.3436857.
38. Maiwald T, Li T, Hotopan GR, Kolb K, Disch K, Potschka J, et al. A review of integrated systems and components for 6G wireless communication in the D-band. Proc IEEE. 2023;111(3):220–56. doi:10.1109/jproc.2023.3240127.
39. Akbar MS, Hussain Z, Ikram M, Sheng QZ, Mukhopadhyay SC. On challenges of sixth-generation (6G) wireless networks: a comprehensive survey of requirements, applications, and security issues. J Netw Comput Appl. 2025;233:104040. doi:10.1016/j.jnca.2024.104040.
40. Petrov V, Guerboukha H, Mittleman DM, Singh A. Wavefront hopping: an enabler for reliable and secure near field terahertz communications in 6G and beyond. IEEE Wirel Commun. 2024;31(1):48–55. doi:10.1109/mwc.003.2300310.
41. Ishteyaq I, Muzaffar K, Shafi N, Alathbah MA. Unleashing the power of tomorrow: exploration of next frontier with 6G networks and cutting edge technologies. IEEE Access. 2024;12:29445–63. doi:10.1109/access.2024.3367976.
42. Wang W, Cui Y, Yu Y, Wang J, Wang C, Hou H, et al. Indoor organic photovoltaic module with 30.6% efficiency for efficient wireless power transfer. Nano Energy. 2024;128:109893. doi:10.1016/j.nanoen.2024.109893.
43. Lei W, Wang Y, Liang Z, Feng J, Zhang W, Fang J, et al. Asymmetric additive-assisted organic solar cells with much better energy harvesting and wireless communication performance. Adv Energy Mater. 2023;13(40):2301755. doi:10.1002/aenm.202301755.
44. Soltani MD, Sarbazi E, Bamiedakis N, De Souza P, Kazemi H, Elmirghani JM, et al. Safety analysis for laser-based optical wireless communications: a tutorial. Proc IEEE. 2022;110(8):1045–72. doi:10.1109/jproc.2022.3181968.
45. Siddiky MNA, Rahman ME, Uzzal MS. Beyond 5G: a comprehensive exploration of 6G wireless communication technologies; 2024. doi:10.20944/preprints202405.0715.v1.
46. Paul D, Prince IA, Islam MS, Ahamed MT, Sarker MNR, Adhikary A. Revolutionizing connectivity through 5G technology. Int J Adv Eng Res Sci. 2025;12(2):591472.
47. Zhao Q, Zou H, Tian Y, Bariah L, Mouhouche B, Bader F, et al. Artificial intelligence-enabled dynamic spectrum management. Intell Spect Manag: Towards 6G. 2025;1:73–89. doi:10.1002/9781394201235.ch4.
48. Basharat S, Hassan SA, Mahmood A, Ding Z, Gidlund M. Reconfigurable intelligent surface-assisted backscatter communication: a new frontier for enabling 6G IoT networks. IEEE Wirel Commun. 2022;29(6):96–103. doi:10.1109/mwc.009.2100423.
49. Ghaderibaneh M, Zhan C, Gupta H. Deepalloc: deep learning approach to spectrum allocation in shared spectrum systems. IEEE Access. 2024;12(6):8432–48. doi:10.1109/access.2024.3352034.
50. Duong TQ, Van Huynh D, Khosravirad SR, Sharma V, Dobre OA, Shin H. From digital twin to metaverse: the role of 6G ultra-reliable and low-latency communications with multi-tier computing. IEEE Wirel Commun. 2023;30(3):140–6. doi:10.1109/mwc.014.2200371.
51. Wu B, Zhang J, Yuan J, Zeng Y, Zhan P, Yin Y, et al. DT-CTFP: 6G-enabled digital twin collaborative traffic flow prediction. IEEE Trans Intell Transp Syst. 2025. doi:10.1109/tits.2025.3582356.
52. Din IU, Almogren A, Kim BS. Blockchain and 6G: pioneering new dimensions in metaverse marketing. IEEE Access. 2024;12:108263–74. doi:10.1109/ACCESS.2024.3438842.
53. Li M, Liu W, Lei J. A review on orthogonal time-frequency space modulation: state-of-art, hotspots and challenges. Comput Netw. 2023;224:109597. doi:10.1016/j.comnet.2023.109597.
54. Ren Y, Shen Y, Zhang L, Kristensen AT, Balatsoukas-Stimming A, Boutillon E, et al. High-throughput and flexible belief propagation list decoder for polar codes. IEEE Trans Signal Process. 2024;72:1158–74. doi:10.1109/tsp.2024.3361073.
55. Ren Y, Kristensen AT, Shen Y, Balatsoukas-Stimming A, Zhang C, Burg A. A sequence repetition node-based successive cancellation list decoder for 5G polar codes: algorithm and implementation. IEEE Trans Signal Process. 2022;70:5592–607. doi:10.1109/tsp.2022.3216921.
56. Salahdine F, Han T, Zhang N. 5G, 6G, and beyond: recent advances and future challenges. Ann Telecommunicat. 2023;78(9):525–49. doi:10.1007/s12243-022-00938-3.
57. Ghosh J, Ra IH, Singh S, Haci H, Al-Utaibi KA, Sait SM. On the comparison of optimal NOMA and OMA in a paradigm shift of emerging technologies. IEEE Access. 2022;10:11616–32. doi:10.1109/ACCESS.2022.3146349.
58. Yang W, Li M, Liu Q. A practical channel estimation strategy for XL-MIMO communication systems. IEEE Commun Lett. 2023;27(6):1580–3. doi:10.1109/LCOMM.2023.3266821.
59. Ngo HQ, Interdonato G, Larsson EG, Caire G, Andrews JG. Ultradense cell-free massive MIMO for 6G: technical overview and open questions. Proc IEEE. 2024;112(7):805–31. doi:10.1109/JPROC.2024.3393514.
60. Shin H, Park S, Kim L, Kim J, Kim T, Song Y, et al. The future service scenarios of 6G telecommunications technology. Telecommun Policy. 2024;48(2):102678. doi:10.1016/j.telpol.2023.102678.
61. Chen S, Zhang J, Björnson E, Demir ÖT, Ai B. Energy-efficient cell-free massive MIMO through sparse large-scale fading processing. IEEE Trans Wirel Commun. 2023;22(12):9374–89. doi:10.1109/twc.2023.3270299.
62. Tera SP, Chinthaginjala R, Pau G, Kim TH. Towards 6G: an overview of the next generation of intelligent network connectivity. IEEE Access. 2024;13:925–61. doi:10.1109/ACCESS.2024.3523327.
63. Wang Z, Du Y, Wei K, Han K, Xu X, Wei G, et al. Vision, application scenarios, and key technology trends for 6G mobile communications. Sci China Inf Sci. 2022;65(5):151301. doi:10.1007/s11432-021-3351-5.
64. Alexandropoulos GC, Shlezinger N, Alamzadeh I, Imani MF, Zhang H, Eldar YC. Hybrid reconfigurable intelligent metasurfaces: enabling simultaneous tunable reflections and sensing for 6G wireless communications. IEEE Vehicular Technol Mag. 2023;19(1):75–84. doi:10.1109/mvt.2023.3332580.
65. Deng R, Zhang Y, Zhang H, Di B, Zhang H, Song L. Reconfigurable holographic surface: a new paradigm to implement holographic radio. IEEE Vehicular Technol Mag. 2023;18(1):20–8. doi:10.1109/MVT.2022.3233157.
66. Yin L, Clerckx B. Rate-splitting multiple access for satellite-terrestrial integrated networks: benefits of coordination and cooperation. IEEE Trans Wirel Commun. 2022;22(1):317–32. doi:10.1109/TWC.2022.3192980.
67. Liu Y, Clerckx B, Popovski P. Network slicing for eMBB, URLLC, and mMTC: an uplink rate-splitting multiple access approach. IEEE Trans Wirel Commun. 2023;23(3):2140–52. doi:10.1109/twc.2023.3295804.
68. Turpati S, Geetha Rani B, Prabu AV, Mukherjee A, Jha S, Swamy KCT. Review of 6G wireless communication system with artificial intelligence. Internet Technol Letters. 2025;8(6):e70127. doi:10.1002/itl2.70127.
69. Shi Y, Lian L, Shi Y, Wang Z, Zhou Y, Fu L, et al. Machine learning for large-scale optimization in 6G wireless networks. IEEE Communicat Surv Tutor. 2023;25(4):2088–132. doi:10.1109/COMST.2023.3300664.
70. Ohtsuki T. Machine learning in 6G wireless communications. IEICE Trans Commun. 2023;106(2):75–83. doi:10.1587/transcom.2022cei0002.
71. Noor K, Imoize AL, Li CT, Weng CY. A review of machine learning and transfer learning strategies for intrusion detection systems in 5G and beyond. Mathematics. 2025;13(7):1088. doi:10.3390/math13071088.
72. Sandeepa C, Siniarski B, Kourtellis N, Wang S, Liyanage M. A survey on privacy for B5G/6G: new privacy challenges, and research directions. J Ind Inf Integr. 2022;30:100405. doi:10.1016/j.jii.2022.100405.
73. Qi Y, Hossain MS. Semi-supervised Federated Learning for Digital Twin 6G-enabled IIoT: a Bayesian estimated approach. J Adv Res. 2024;66:47–57. doi:10.1016/j.jare.2024.02.012.
74. Ajani TS, Imoize AL, Atayero AA. An overview of machine learning within embedded and mobile devices-optimizations and applications. Sensors. 2021;21(13):4412. doi:10.3390/s21134412.
75. Javaid M, Haleem A, Singh RP, Suman R. 5G technology for healthcare: features, serviceable pillars, and applications. Intell Pharm. 2023;1(1):2–10. doi:10.1016/j.ipha.2023.04.001.
76. Zhao Z, Wang J, Hong W, Quek TQ, Ding Z, Peng M. Ensemble federated learning with non-IID data in wireless networks. IEEE Trans Wirel Commun. 2023;23(4):3557–71. doi:10.1109/TWC.2023.3309376.
77. Banafaa M, Shayea I, Din J, Azmi MH, Alashbi A, Daradkeh YI, et al. 6G mobile communication technology: requirements, targets, applications, challenges, advantages, and opportunities. Alex Eng J. 2023;64:245–74. doi:10.1016/j.aej.2022.08.017.
78. Abd Elaziz M, Al-qaness MA, Dahou A, Alsamhi SH, Abualigah L, Ibrahim RA, et al. Evolution toward intelligent communications: impact of deep learning applications on the future of 6G technology. Wiley Interdiscip Rev Data Min Know Disc. 2024;14(1):e1521. doi:10.1002/widm.1521.
79. Zhu M, Zhang J, Hua B, Lei M, Cai Y, Tian L, et al. Ultra-wideband fiber-THz-fiber seamless integration communication system toward 6G: architecture, key techniques, and testbed implementation. Sci China Inf Sci. 2023;66(1):113301. doi:10.1007/s11432-022-3565-3.
80. Mohjazi L, Selim B, Tatipamula M, Imran MA. The journey toward 6G: a digital and societal revolution in the making. IEEE Inter Things Mag. 2024;7(2):119–28. doi:10.1109/iotm.001.2300119.
81. Xin J, Xu W, Cao B, Wang T, Zhang S. A deep-learning-based MAC for integrating channel access, rate adaptation, and channel switch. Digit Commun Netw. 2024;11(4):1042–54. doi:10.1016/j.dcan.2024.10.010.
82. Liu R, Li M, Luo H, Liu Q, Swindlehurst AL. Integrated sensing and communication with reconfigurable intelligent surfaces: opportunities, applications, and future directions. IEEE Wirel Commun. 2023;30(1):50–7. doi:10.1109/MWC.002.2200206.
83. Murugan R, Yenduri G, Maran P, Reddy Gadekallu T. The synergy of artificial intelligence and blockchain in 6G spectrum management. In: Intelligent spectrum management: towards 6G; 2025. p. 237–62. doi:10.1002/9781394201235.ch10.
84. Chang HH, Song Y, Doan TT, Liu L. Federated multi-agent deep reinforcement learning (Fed-MADRL) for dynamic spectrum access. IEEE Trans Wirel Commun. 2023;22(8):5337–48. doi:10.1109/TWC.2022.3233436.
85. Pei E, Huang Y, Zhang L, Li Y, Zhang J. Intelligent access to unlicensed spectrum: a mean field based deep reinforcement learning approach. IEEE Trans Wirel Commun. 2022;22(4):2325–37. doi:10.1109/TWC.2022.3210955.
86. Zheng Q, Tian X, Yu Z, Wang H, Elhanashi A, Saponara S. DL-PR: generalized automatic modulation classification method based on deep learning with priori regularization. Eng Appl Artif Intell. 2023;122(11):106082. doi:10.1016/j.engappai.2023.106082.
87. Van Luong T, Shlezinger N, Xu C, Hoang TM, Eldar YC, Hanzo L. Deep learning based successive interference cancellation for the non-orthogonal downlink. IEEE Trans Vehicular Technol. 2022;71(11):11876–88. doi:10.1109/tvt.2022.3193201.
88. Yang B, Cao X, Huang C, Guan YL, Yuen C, Di Renzo M, et al. Spectrum-learning-aided reconfigurable intelligent surfaces for green 6G networks. IEEE Netw. 2022;35(6):20–6. doi:10.1109/mnet.110.2100301.
89. Flandermeyer SA, Mattingly RG, Metcalf JG. Deep reinforcement learning for cognitive radar spectrum sharing: a continuous control approach. IEEE Transact Radar Syst. 2024;2:125–37. doi:10.1109/TRS.2024.3353112.
90. Chen W, Lin X, Lee J, Toskala A, Sun S, Chiasserini CF, et al. 5G-advanced toward 6G: past, present, and future. IEEE J Sel Areas Commun. 2023;41(6):1592–619. doi:10.1109/JSAC.2023.3274037.
91. Chizhik D, Du J, Valenzuela RA, Samardzija D, Kucera S, Kozlov D, et al. Directional measurements and propagation models at 28 GHz for reliable factory coverage. IEEE Trans Antennas Propag. 2022;70(10):9596–606. doi:10.1109/tap.2022.3177546.
92. Wang C, Li Y, Li Z, Pang D, Deng B, Liu M, et al. A compact analog beamforming receiver for 8-element by 8-beam massive multi-user MIMO in 65 nm CMOS. Microelectron J. 2025;163(4):106747. doi:10.1016/j.mejo.2025.106747.
93. Sheraz M, Chuah TC, Lee YL, Alam MM, Han Z. A comprehensive survey on revolutionizing connectivity through artificial intelligence-enabled digital twin network in 6G. IEEE Access. 2024;12(21):49184–215. doi:10.1109/access.2024.3384272.
94. Sundar R, Amir M, Subramanian R, Prabakar D, Giri J, Balachandran G, et al. Spectral energy balancing system with massive MIMO based hybrid beam forming for wireless 6G communication using dual deep learning model. Heliyon. 2024;10(4):e26085. doi:10.1016/j.heliyon.2024.e26085.
95. Tshakwanda PM, Arzo ST, Devetsikiotis M. Advancing 6G network performance: AI/ML framework for proactive management and dynamic optimal routing. IEEE Open J Comput Soc. 2024;5:303–14. doi:10.1109/ojcs.2024.3398540.
96. Jiang F, Peng Y, Dong L, Wang K, Yang K, Pan C, et al. Large language model enhanced multi-agent systems for 6G communications. IEEE Wirel Commun. 2024;31(6):48–55. doi:10.1109/MWC.016.2300600.
97. Andreou A, Mavromoustakis CX. 6G+ networks through enhanced efficiency and sustainability with MADDPG-driven network slicing in SoS environments. IEEE Trans Green Commun Netw. 2024;8(4):1752–61. doi:10.1109/TGCN.2024.3404500.
98. Kavehmadavani F, Nguyen VD, Vu TX, Chatzinotas S. Empowering traffic steering in 6G open RAN with deep reinforcement learning. IEEE Trans Wirel Commun. 2024;23(10):12782–98. doi:10.1109/TWC.2024.3396273.
99. Bajracharya R, Shrestha R, Hassan SA, Jung H, Shin H. 5G and beyond private military communication: trend, requirements, challenges and enablers. IEEE Access. 2023;11:83996–4012. doi:10.1109/ACCESS.2023.3303211.
100. Jain P, Gupta A, Kumar N. A vision towards integrated 6G communication networks: promising technologies, architecture, and use-cases. Phys Commun. 2022;55(2):101917. doi:10.1016/j.phycom.2022.101917.
101. Rohini P, Tripathi S, Preeti CM, Renuka A, Gonzales JLA, Gangodkar D, et al. A study on the adoption of wireless communication in big data analytics using neural networks and deep learning. In: 2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE); 2022 Apr; Greater Noida, India. p. 1071–6. doi:10.1109/ICACITE53722.2022.9823439.
102. Wang CX, You X, Gao X, Zhu X, Li Z, Zhang C, et al. On the road to 6G: visions, requirements, key technologies, and testbeds. IEEE Communicat Surv Tutor. 2023;25(2):905–74. doi:10.1109/COMST.2023.3249835.
103. Jawad AT, Maaloul R, Chaari L. A comprehensive survey on 6G and beyond: enabling technologies, opportunities of machine learning and challenges. Comput Netw. 2023;237:110085. doi:10.1016/j.comnet.2023.110085.
104. Ahad A, Jiangbina Z, Tahir M, Shayea I, Sheikh MA, Rasheed F. 6G and intelligent healthcare: taxonomy, technologies, open issues and future research directions. Internet of Things. 2024;25(7):101068. doi:10.1016/j.iot.2024.101068.
105. Li T, Zhu K, Luong NC, Niyato D, Wu Q, Zhang Y, et al. Applications of multi-agent reinforcement learning in future internet: a comprehensive survey. IEEE Communicat Surv Tutor. 2022;24(2):1240–79. doi:10.1109/comst.2022.3160697.
106. Alhammadi A, Shayea I, El-Saleh AA, Azmi MH, Ismail ZH, Kouhalvandi L, et al. Artificial intelligence in 6G wireless networks: opportunities, applications, and challenges. Int J Intell Syst. 2024;2024(1):8845070.
107. Gong T, Gavriilidis P, Ji R, Huang C, Alexandropoulos GC, Wei L, et al. Holographic MIMO communications: theoretical foundations, enabling technologies, and future directions. IEEE Communicat Surv Tutor. 2023;26(1):196–257. doi:10.1109/COMST.2023.3309529.
108. Zheng Z, Jiang S, Feng R, Ge L, Gu C. An adaptive backoff selection scheme based on Q-learning for CSMA/CA. Wirel Netw. 2023;29(4):1899–909. doi:10.1007/s11276-023-03257-0.
109. Wang X, Sun G, Xin Y, Liu T, Xu Y. Deep transfer reinforcement learning for beamforming and resource allocation in multi-cell MISO-OFDMA systems. IEEE Trans Signal Inf Process Over Netw. 2022;8:815–29. doi:10.1109/tsipn.2022.3208432.
110. Ahmed M, Raza S, Soofi AA, Khan F, Khan WU, Abideen SZU, et al. Active reconfigurable intelligent surfaces: expanding the frontiers of wireless communication—a survey. IEEE Communicat Surv Tutor. 2024;27(2):839–69. doi:10.1109/COMST.2024.3423460.
111. Xu M, Du H, Niyato D, Kang J, Xiong Z, Mao S, et al. Unleashing the power of edge-cloud generative AI in mobile networks: a survey of AIGC services. IEEE Communicat Surv Tutor. 2024;26(2):1127–70. doi:10.1109/COMST.2024.3353265.
112. Imoize AL, Obakhena HI, Anyasi FI, Isabona J, Ojo S, Faruk N. Reconfigurable intelligent surfaces enabling 6G wireless communication systems: use cases and technical considerations. In: 2022 5th Information Technology for Education and Development (ITED); 2022 Nov 1–3; Abuja, Nigeria. p. 1–7. doi:10.1109/ITED56637.2022.10051543.
113. Chaccour C, Saad W, Debbah M, Han Z, Poor HV. Less data, more knowledge: building next-generation semantic communication networks. IEEE Communicat Surv Tutor. 2024;27(1):37–76. doi:10.1109/COMST.2024.3412852.
114. Hasan KMB, Sajid M, Lapina MA, Shahid M, Kotecha K. Blockchain technology meets 6G wireless networks: a systematic survey. Alex Eng J. 2024;92:199–220. doi:10.1016/j.aej.2024.02.031.
115. Kludze A, Kono J, Mittleman DM, Ghasempour Y. A frequency-agile retrodirective tag for large-scale sub-terahertz data backscattering. Nat Commun. 2024;15(1):8756. doi:10.1038/s41467-024-53035-5.
116. Kaplan A, Vieira J, Larsson EG. Direct link interference suppression for bistatic backscatter communication in distributed MIMO. IEEE Trans Wirel Commun. 2023;23(2):1024–36. doi:10.1109/twc.2023.3285250.
117. Nguyen CT, Saputra YM, Van Huynh N, Nguyen TN, Hoang DT, Nguyen DN, et al. Emerging technologies for 6G non-terrestrial-networks: from academia to industrial applications. IEEE Open J Communicat Soc. 2024;5:3852–85. doi:10.1109/ojcoms.2024.3418574.
118. Kanthavel R, Dhaya R. Research review on AI-powered 6G as sixth-sense technologies. AI Large Scale Communicat Netw. 2025:373–94. doi:10.4018/979-8-3693-6552-6.ch017.
119. Thantharate A, Beard C. ADAPTIVE6G: adaptive resource management for network slicing architectures in current 5G and future 6G systems. J Netw Syst Manag. 2023;31(1):9. doi:10.1007/s10922-022-09693-1.
120. Cai Y, Cheng P, Chen Z, Ding M, Vucetic B, Li Y. Deep reinforcement learning for online resource allocation in network slicing. IEEE Trans Mob Comput. 2023;23(6):7099–116. doi:10.1109/TMC.2023.3328950.
121. Syed QS, Hussain S, Bashir I. Artificial intelligence and machine learning as pioneers in advancing 5G/6G network capabilities: evolution, introduction, antecedents, and consequences. 5G/6G Adv Commun Technol Agil Manag. 2025:215–32. doi:10.4018/979-8-3693-6725-4.ch009.
122. Mahamod U, Mohamad H, Shayea I, Othman M, Asuhaimi FA. Handover parameter for self-optimisation in 6G mobile networks: a survey. Alex Eng J. 2023;78(4):104–19. doi:10.1016/j.aej.2023.07.015.
123. Imoize AL, Obakhena HI, Anyasi FI, Sur SN. A review of energy efficiency and power control schemes in ultra-dense cell-free massive MIMO systems for sustainable 6G wireless communication. Sustainability. 2022;14(17):11100. doi:10.3390/su141711100.
124. Du X, Wang T, Feng Q, Ye C, Tao T, Wang L, et al. Multi-agent reinforcement learning for dynamic resource management in 6G in-X subnetworks. IEEE Trans Wirel Commun. 2022;22(3):1900–14. doi:10.1109/twc.2022.3207918.
125. Zou S, Wu J, Yu H, Wang W, Huang L, Ni W, et al. Efficiency-optimized 6G: a virtual network resource orchestration strategy by enhanced particle swarm optimization. Digit Commun Netw. 2024;10(5):1221–33. doi:10.1016/j.dcan.2023.06.008.
126. Alhussien N, Gulliver TA. Toward AI-enabled green 6G networks: a resource management perspective. IEEE Access. 2024;12:132972–95. doi:10.1109/access.2024.3460656.
127. Chaccour C, Saad W, Debbah M, Poor HV. Joint sensing, communication, and AI: a trifecta for resilient THz user experiences. IEEE Trans Wirel Commun. 2024;23(9):11444–60. doi:10.1109/twc.2024.3382192.
128. Chaaya CB, Bennis M. RIS phase optimization via generative flow networks. IEEE Wirel Commun Lett. 2024;13(7):1988–92. doi:10.1109/lwc.2024.3400127.
129. Xiao Y, Liao Y, Li Y, Shi G, Poor HV, Saad W, et al. Reasoning over the air: a reasoning-based implicit semantic-aware communication framework. IEEE Trans Wirel Commun. 2023;23(4):3839–55. doi:10.1109/twc.2023.3312115.
130. Du H, Wang J, Niyato D, Kang J, Xiong Z, Kim DI. AI-generated incentive mechanism and full-duplex semantic communications for information sharing. IEEE J Sel Areas Commun. 2023;41(9):2981–97. doi:10.36227/techrxiv.22209178.
131. Du B, Du H, Liu H, Niyato D, Xin P, Yu J, et al. YOLO-based semantic communication with generative AI-aided resource allocation for digital twins construction. IEEE Internet Things J. 2023;11(5):7664–78. doi:10.1109/jiot.2023.3317629.
132. Lin Y, Gao Z, Du H, Niyato D, Kang J, Jamalipour A, et al. A unified framework for integrating semantic communication and AI-generated content in metaverse. IEEE Netw. 2023;38(4):174–81. doi:10.1109/mnet.2023.3321539.
133. Khoramnejad F, Hossain E. Generative AI for the optimization of next-generation wireless networks: basics, state-of-the-art, and open challenges. IEEE Communicat Surv Tutor. 2025. doi:10.1109/COMST.2025.3535554.
134. Raviv T, Park S, Simeone O, Eldar YC, Shlezinger N. Online meta-learning for hybrid model-based deep receivers. IEEE Trans Wirel Commun. 2023;22(10):6415–31. doi:10.1109/TWC.2023.3241841.
135. Wu M, Gao Z, Huang Y, Xiao Z, Ng DWK, Zhang Z. Deep learning-based rate-splitting multiple access for reconfigurable intelligent surface-aided tera-hertz massive MIMO. IEEE J Sel Areas Commun. 2023;41(5):1431–51. doi:10.1109/jsac.2023.3240781.
136. Samanta RK, Sadhukhan B, Samaddar H, Sarkar S, Koner C, Ghosh M. Scope of machine learning applications for addressing the challenges in next-generation wireless networks. CAAI Trans Intell Technol. 2022;7(3):395–418. doi:10.1049/cit2.12114.
137. Chege S, Walingo T. Deep learning multi-user detection for PD-SCMA. IEEE Access. 2024;12:75136–45. doi:10.1109/ACCESS.2024.3405192.
138. Jagatheesaperumal SK, Ahmad I, Höyhtyä M, Khan S, Gurtov A. Deep learning frameworks for cognitive radio networks: review and open research challenges. J Netw Comput Appl. 2025;233:104051. doi:10.1016/j.jnca.2024.104051.
139. Yeh C, Do Jo G, Ko YJ, Chung HK. Perspectives on 6G wireless communications. ICT Express. 2023;9(1):82–91. doi:10.1016/j.icte.2021.12.017.
140. He H, Fei S, Yan Z. Advancing 5G security and privacy with AI: a survey. ACM Comput Surv. 2025;58(2):1–36. doi:10.1145/3744555.
141. Suomalainen J, Ahmad I, Shajan A, Savunen T. Cybersecurity for tactical 6G networks: threats, architecture, and intelligence. Future Gener Comput Syst. 2025;162(4):107500. doi:10.1016/j.future.2024.107500.
142. Oleiwi HW, Mhawi DN, Al-Raweshidy H. A meta-model to predict and detect malicious activities in 6G-structured wireless communication networks. Electronics. 2023;12(3):643. doi:10.3390/electronics12030643.
143. Hema M. HIDE-6G: advanced intrusion detection system for secure 6G network using deep learning. Int J Intell Eng Syst. 2024;17(5). doi:10.22266/ijies2024.1031.37.
144. Sanjalawe Y, Fraihat S, Abualhaj M, Makhadmeh S, Alzubi E. A review of 6G and AI convergence: enhancing communication networks with artificial intelligence. IEEE Open J Communicat Soc. 2025;6(2):2308–55. doi:10.1109/OJCOMS.2025.3553302.
145. Zawish M, Dharejo FA, Khowaja SA, Raza S, Davy S, Dev K, et al. AI and 6G into the metaverse: fundamentals, challenges and future research trends. IEEE Open J Communicat Soc. 2024;5:730–78. doi:10.1109/ojcoms.2024.3349465.
146. Abdelsadek MY, Chaudhry AU, Darwish T, Erdogan E, Karabulut-Kurt G, Madoery PG, et al. Future space networks: toward the next giant leap for humankind. IEEE Trans Commun. 2022;71(2):949–1007. doi:10.1109/tcomm.2022.3228611.
147. Yang Z, Xu W, Liang L, Cui Y, Qin Z, Debbah M. On privacy, security, and trustworthiness in distributed wireless large AI models. Sci China Inf Sci. 2025;68(7):170301. doi:10.1007/s11432-024-4465-3.
148. Park C, Lee J, Kim Y, Park JG, Kim H, Hong D. An enhanced AI-based network intrusion detection system using generative adversarial networks. IEEE Internet Things J. 2022;10(3):2330–45. doi:10.1109/jiot.2022.3211346.
149. Yang L, Yang SX, Li Y, Lu Y, Guo T. Generative adversarial learning for trusted and secure clustering in industrial wireless sensor networks. IEEE Trans Ind Electron. 2022;70(8):8377–87. doi:10.1109/tie.2022.3212378.
150. Clerckx B, Mao Y, Yang Z, Chen M, Alkhateeb A, Liu L, et al. Multiple access techniques for intelligent and multifunctional 6G: tutorial, survey, and outlook. Proc IEEE. 2024;112(7):832–79. doi:10.1109/JPROC.2024.3409428.
151. Anh VTK. The rise of AI in 6G networks: a comprehensive review of opportunities, challenges, and applications. In: 2024 International Conference on Advanced Technologies for Communications (ATC); 2024 Oct; Ho Chi Minh City, Vietnam. p. 333–8. doi:10.1109/ATC63255.2024.10908115.
152. Sabir B, Yang S, Nguyen D, Wu N, Abuadbba A, Suzuki H, et al. Systematic literature review of AI-enabled spectrum management in 6G and future networks. arXiv:2407.10981. 2024. doi:10.48550/arxiv.2407.10981.
153. Montlouis W, Imoize AL. Fundamentals of wireless communications: massive MIMO essentials for 6G and beyond. In: Massive MIMO for future wireless communication systems: technology and applications. Hoboken, NJ, USA: John Wiley & Sons; 2025. p. 1–21. doi:10.1002/9781394228331.ch1.
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.