Open Access

ARTICLE

Proactive Mobility-Aware Fog Service Continuity Using Digital Twins and GRU–EWMA-Based Association Forecasting

Navjeet Kaur1, Ayush Mittal2, Saad Alahmari3,*

1 Apex Institute of Technology (CSE), Chandigarh University, Mohali, Punjab, India
2 Strategic Technology Group (STG), Infosys Ltd., Chandigarh, India
3 Department of Computer Science, Applied College, Northern Border University, Arar, Saudi Arabia

* Corresponding Author: Saad Alahmari.

(This article belongs to the Special Issue: Integrating Computing Technology of Cloud-Fog-Edge Environments and its Application)

Computers, Materials & Continua 2026, 88(1), 65 https://doi.org/10.32604/cmc.2026.079991

Abstract

Mobile fog computing must support latency-sensitive applications under dynamic user mobility and time-varying network conditions. Existing mobility-aware scheduling approaches are largely reactive and often ignore prediction uncertainty, resulting in service disruptions and inefficient task migration. This paper proposes an uncertainty-aware digital twin-based orchestration framework for proactive mobility-aware fog computing. The framework maintains real-time synchronized digital twins of users and fog nodes and integrates a hybrid Gated Recurrent Unit-Exponentially Weighted Moving Average (GRU-EWMA) mobility prediction model with fog-load forecasting to enable joint mobility- and load-aware decision-making. An entropy-based confidence mechanism is introduced to regulate proactive handover and task migration, thereby reducing unnecessary task migrations when predictions are uncertain. The proposed framework is implemented in the MobFogSim simulator and evaluated against state-of-the-art baselines. Experimental results demonstrate that the proposed approach reduces the average task delay by up to 28.1%, decreases energy consumption by up to 9.5%, and improves the task success rate to 99.1%, while incurring only a modest digital-twin computational overhead. These results confirm that integrating uncertainty-aware mobility prediction with digital twin–driven orchestration significantly enhances reliability and efficiency in mobile fog computing environments.

Keywords

Fog computing; mobile edge computing (MEC); digital twin; proactive handoff; task migration; service continuity; gated recurrent unit (GRU); exponentially weighted moving average (EWMA); MobFogSim

1  Introduction

The rapid growth of latency-sensitive mobile applications like AR/VR, interactive analytics, and real-time sensing has intensified the need for reliable computation close to end users. Fog computing addresses this demand by extending cloud capabilities to the network edge through distributed fog nodes, reducing end-to-end latency and improving user experience. However, these benefits primarily hold under low user mobility. As a user moves, the wireless path to the serving fog node may experience degraded link quality and increased propagation delay. This leads to longer response times, service interruption, and missed deadlines if service placement remains static [1]. Consequently, mobility-aware offloading and service migration are essential for sustaining quality of service (QoS) in mobile fog environments [2].

A key difficulty is that mobility and fog congestion evolve simultaneously and unpredictably. Reactive policies that only respond after the user leaves coverage or after queues build up are prone to unavoidable handoffs, ping-pong effects, and costly migrations. Moreover, decisions made solely from instantaneous measurements, such as RSSI and SNR, can be unreliable due to short-term fluctuations. Decisions made without accounting for near-future fog load may direct users toward congested nodes. Therefore, a practical control plane must (i) predict near-future user association tendencies and locations, (ii) anticipate fog-side congestion, and (iii) quantify uncertainty so that proactive actions are only triggered when predictions are sufficiently reliable.

Digital Twin (DT) technology has recently emerged as a promising paradigm for managing such complex, dynamic systems by maintaining a virtual, continuously synchronised replica of physical entities such as users, fog nodes, and network conditions, supporting data-driven monitoring, simulation, and optimization [3,4]. While DTs were initially adopted in industrial contexts, they are increasingly used in networking and edge/fog computing environments to support latency-aware and context-aware decision-making [5]. Recent studies [6–9] show that DT-assisted orchestration can improve real-time scheduling and resource allocation when accurate synchronisation and predictive analytics are available. Nevertheless, a gap remains in DT-enabled fog control for mobile users: existing approaches often treat mobility prediction and fog load prediction separately, trigger proactive actions without an explicit uncertainty signal, or lack a clear hierarchical mechanism to scale decisions from local domains to inter-domain coordination.

This paper proposes a hierarchical DT-enhanced fog orchestration framework in which Local Fog Orchestrators (LFOs) manage fine-grained user and fog twins. The LFO executes real-time control, while a Global Fog Orchestrator (GFO) maintains an abstracted system-wide view for policy coordination and inter-domain support. Within each LFO domain, a predictive control pipeline is introduced that integrates (i) GRAIN, a GRU-EWMA hybrid predictor that stabilises noisy measurements via EWMA and encodes short-horizon temporal dynamics via a GRU; (ii) an entropy-based confidence score to quantify the certainty of predicted association tendencies; (iii) EWMA-based forecasting of near-future fog congestion indicators; and (iv) a mobility-load fusion decision rule to select the next serving fog node, followed by a feasibility check using the predicted user location. The resulting predictions and confidence are then used to trigger proactive handoff preparation and task migration once the user is predicted to leave the current fog coverage before task completion. To assess the QoS impact of decisions, a queueing-based response-time abstraction and an energy model for fog execution are employed.

1.1 Research Question, Hypothesis, and Contribution

Considering the challenge discussed above in mobility-aware task scheduling, the proposed work is presented to answer the identified research question: Can a DT-driven, uncertainty-aware predictive control loop that jointly forecasts user mobility and fog congestion reduce service disruption and latency under mobility while keeping orchestration overhead acceptable? Further, our hypothesis is that combining (a) noise-stabilised temporal mobility, (b) explicit confidence estimation, and (c) load-aware next-fog selection leads to more robust proactive actions than mobility-only or reactive baselines. Building upon the motivation of mobility-aware fog scheduling and migration, the main contributions of this paper are as follows:

•   The proposed work’s primary contribution is the design of a digital-twin–enabled fog orchestration framework that maintains synchronised twin states for users and fog nodes and uses those states to make proactive scheduling decisions. The focus is not merely on offloading but on building an orchestration layer that can continuously track mobility, link conditions, and fog-side congestion and then act in advance to reduce service disruption. The development of the GRAIN prediction component, an uncertainty-aware mobility and association predictor, is key to this effort. It combines EWMA smoothing to stabilise noisy measurements with a GRU-based sequence model that learns short-horizon dynamics, producing (i) a probabilistic association tendency over candidate fog nodes and (ii) a predicted future location.

•   A further key contribution is the decision policy that fuses predicted mobility with predicted fog congestion to select the next fog node and trigger proactive migration and handoff. The model forecasts fog load using EWMA-based load prediction and combines it with the association tendency to compute a composite next-fog score.

•   The proposed model improves average task delay, energy efficiency, and reliability while incurring only a small computational overhead on the Digital Twin (DT).

1.2 Key Novel Contributions beyond Existing Work

While mobility-aware scheduling, digital twin-aided orchestration, and learning-enabled offloading have each been investigated extensively in fog and edge computing, most studies address these areas in isolation, without integrating mobility prediction, resource congestion modeling, and uncertainty-aware decision-making [10–13]. Although reinforcement learning-based schedulers have been proposed for adaptive offloading decisions in edge computing, they often initiate proactive actions without accounting for uncertainty in the prediction process [14,15]. Similarly, digital twin-aided frameworks have been proposed for monitoring and for accelerating optimization in edge computing, but without jointly modeling mobility and load in their decision-making [12,13]. The novelty of this work lies not in the individual components but in the proposed uncertainty-aware decision mechanism, which jointly models user mobility, fog-side resource congestion, and prediction uncertainty within a digital twin control loop. Unlike existing works that rely solely on predicted user mobility, the proposed framework incorporates confidence-gated orchestration and mobility/load fusion to regulate proactive handoff and migration decisions in fog computing environments.

Prior work on service continuity in fog and MEC environments has typically addressed mobility-aware migration, resource management, and digital-twin-assisted orchestration as largely separate problems. Mobility-driven methods such as Follow-Me Cloud [16] and PROMO [17] anticipate user movement and support proactive re-association, but they do not explicitly combine mobility prediction with near-future fog congestion estimation in the final service-placement decision. Digital-twin-based approaches, on the other hand, improve monitoring and predictive control, but most do not jointly integrate (i) short-horizon mobility prediction, (ii) fog-side load forecasting, and (iii) uncertainty-aware gating of proactive migration decisions [18–20]. In contrast, the proposed framework combines a synchronized digital twin control loop, a hybrid GRU-EWMA mobility predictor (GRAIN), EWMA-based fog-load forecasting, and entropy-based confidence gating to support proactive and load-aware service continuity. Table 1 summarizes the main differences with representative prior studies.

Table 1: Comparison of the proposed framework with representative prior studies

1.3 Organization

The remainder of this paper is organized as follows. Section 2 reviews related work on mobility-aware task scheduling in fog/edge systems, learning-based offloading, and digital-twin–assisted orchestration. Section 3 presents the proposed DT-enhanced proactive scheduling framework, including the considered system architecture presented in Section 3.1, digital-twin construction and synchronization in Section 3.2, the GRAIN-based mobility prediction together with fog-load forecasting and mobility-load fusion for next-fog selection in Section 3.3, and the proactive handoff and task-migration control logic in Section 3.4; the overall scheduling process is summarized in Algorithm 1. The queueing-based execution-delay abstraction and fog execution-energy model used for cost estimation are detailed in Section 3.6. Section 4 formulates the objective function and defines the end-to-end latency and energy surrogates used by the scheduler. Section 5 describes the MobFogSim implementation, simulation settings in Section 5.1, and evaluation metrics in Section 5.3. Section 6 reports and discusses the experimental results. Finally, Section 7 concludes the paper and outlines future research directions, while Appendix A provides a consolidated notation table.

2  Literature Review

This section is divided into two major parts: Section 2.1 discusses the conventional mobility-aware scheduling literature, and the next Section 2.2 presents literature on digital twins.

2.1 Conventional, Learning-Based, and Mobility-Aware Scheduling in Edge Systems

Along with the development of fog and edge computing technologies, cybersecurity has become a significant issue in IoT-based systems. Recent literature emphasises that interpretable and trustworthy intrusion detection systems must be implemented to ensure transparent, reliable decision-making in complex systems. For example, it has been shown that an intrusion detection system based on hybrid optimisation techniques and Local Interpretable Model-Agnostic Explanations (LIME) achieves high detection rates while maintaining explainability. However, these methods are mostly centralised and do not readily extend to distributed systems or to decision-making under uncertainty [21,22].

Task scheduling and resource management in fog and edge computing have been widely studied in recent years. Early works addressed the joint optimisation of communication and computation resources. For example, Mao et al. [23] formulated a mobile-edge computing (MEC) offloading problem that jointly schedules tasks and allocates transmit power, solving it via a heuristic that demonstrated the benefits of edge offloading over cloud-only processing. Subsequent research has increasingly applied machine learning (ML) techniques to tackle the complexity of scheduling under dynamic conditions. Reinforcement learning (RL), in particular, has been popular for edge scheduling because it can learn adaptive policies. Hu et al. [14,24] proposed SPEAR, a deep Q-learning-based scheduler that considers dependency constraints among tasks to optimise their placement in distributed edge-cloud environments, achieving better makespan than heuristic baselines. Further, Zhou et al. [15] employed a deep RL approach to schedule IoT tasks in a Space-Air-Ground Integrated Network (SAGIN) architecture with the goal of minimizing end-to-end delay. Their RL agent learns an offloading policy across satellite, aerial, and ground nodes, outperforming traditional algorithms in reducing latency. Similarly, Huang et al. [25] addressed deadline-aware task offloading in multi-access edge computing by modeling it as a partially observable Markov decision process and training an RL agent to meet task deadlines under uncertainty. This Partially Observable Markov Decision Process (POMDP)-based approach improved the deadline miss rate compared to greedy policies.

Given the highly dynamic and unpredictable nature of edge environments, such as user mobility and time-varying loads, several works have incorporated mobility awareness and predictive mechanisms into scheduling. Kaur et al. [17] presented PROMO, a PROactive MObility-support model for fog scheduling that anticipates user mobility and proactively assigns latency-sensitive tasks to appropriate fog nodes in advance. By leveraging trajectory prediction techniques, PROMO reduces service disruption as users move. Earlier, Zhu et al. [26] had introduced a “Fog Follow Me” strategy for vehicular fog computing, dynamically migrating or redistributing tasks among fog nodes to maintain low latency and service quality as vehicles travel. Similarly, Maleki and Mashayekhy [27] developed a mobility-aware offloading scheme that uses mobility prediction to decide whether to offload tasks to edge servers or wait, thereby reducing latency by preempting connectivity losses. In addition to software solutions, some researchers considered network handover coordination. Ngo et al. [28] proposed a coordinated container migration and base station handover strategy in MEC to maintain service continuity during user movement. Their method triggers service migration in tandem with cellular handovers, minimizing task interruptions for mobile users.

Another line of work focuses on multi-objective and meta-heuristic solutions for fog scheduling. Traditional scheduling must often balance multiple QoS metrics, leading to NP-hard optimization problems. Kaur et al. [29] formulated fog scheduling as a multi-objective optimization and proposed Task-Resource Adaptive Pairing (TRAP). TRAP uses a batching, ranking, and priority-based heuristic to pair tasks with fog nodes, reducing the search space and simultaneously minimizing delay, energy consumption, and cost. Implemented in iFogSim, TRAP achieved reductions in task delay and energy usage compared to naive scheduling. Liu et al. [30] combined bio-inspired algorithms for efficient scheduling by integrating Particle Swarm Optimisation (PSO) and Genetic Algorithm into an Artificial Bee Colony scheme. Their hybrid algorithm optimizes task allocation across fog clusters, reducing average service latency and energy consumption.

2.2 Digital Twin-Enabled Intelligent Orchestration and Emerging Work

While the above studies improve scheduling through optimization, learning, and mobility prediction, they generally do not use a continuously synchronized digital replica to support online orchestration decisions. In parallel, the concept of the Digital Twin (DT) has emerged as a promising tool for network optimization. A digital twin is a virtual replica of a physical system that is continuously synchronized with real-world data. Tao et al. [31] and Liu et al. [32] surveyed the state of the art in digital twins, highlighting their potential to provide real-time system insights and predictive analytics. The networking community has begun adopting DTs to manage edge resources under uncertainty. Sun et al. [19] introduced the concept of a Digital Twin Edge Network (DITEN) for 6G, where each edge server has a twin model estimating its state, and a system-level twin provides training data for an RL-based offloading decision engine. They formulated a latency minimization problem with migration cost constraints and solved it using an actor-critic deep RL approach with Lyapunov optimization for constraint handling. Wang et al. [20] specifically leveraged a digital twin to accelerate RL convergence for task scheduling. They proposed a DT-assisted Q-learning method where the agent can evaluate multiple actions in parallel within the twin simulator. To this end, they developed two algorithms, Digital Twin-Assisted Asynchronous Q-learning (DTAQL) and Digital Twin-Assisted Exploring Q-learning (DTEQL), which showed faster convergence than standard Q-learning. Alourani et al. [33] propose a multi-layer, closed-loop smart-city architecture that unifies (i) IoT sensing and data acquisition, (ii) edge-level preprocessing and low-latency AI inference, (iii) privacy-preserving federated learning for distributed model training across heterogeneous devices, and (iv) a synchronized digital twin layer for simulation-driven decision support. 
The proposed framework is demonstrated on two representative urban domains, i.e., adaptive traffic management and underground pipeline monitoring. Further, Abdallah and Alghamdi [34] present a lightweight, decentralised traffic-signal optimization approach that couples a GRU-based predictor with a DT feedback loop. The GRU module forecasts short-horizon, local congestion events from vehicle-based IoT sensor streams, while the DT validates and adaptively adjusts control actions in response to live roadway-condition changes.

Beyond single-agent systems, multi-agent and federated learning approaches have been proposed to handle distributed edge environments. Zhang et al. [35] presented an adaptive multi-agent deep RL framework with digital twins for vehicular edge networks. In their approach, each vehicle is an agent that learns cooperative offloading policies, while a coordination graph and digital twin of the network help to evaluate joint actions efficiently. This method minimized overall offloading cost and latency by enabling agents to exploit both physical and twin network feedback. Arsalan et al. [36] focused on a federated learning setting for UAV-assisted edge computing. They proposed a DT-driven Federated Deep RL algorithm (DT-AFA) for coordinating task offloading among drones in smart agriculture. Each UAV runs a deep RL agent that decides on task offloading, transmission power, and local execution, while a cloud-based digital twin of the environment enables parallel policy evaluation, and a federated server aggregates the models from the UAVs. By incorporating a semantic-aware reward design and leveraging both DT simulation and federated learning, their solution improved task success rates and lowered service migration overhead.

There is also growing interest in integrating security and other QoS aspects into edge scheduling. For instance, Kesavan et al. [37] developed a Secure Edge Enabled Multi-Task Scheduling (SEE-MTS) model for IoE applications using RL. Their framework not only schedules tasks to edge nodes but also employs encryption and dynamic key generation to ensure data security during offloading. A multi-task scheduling mechanism optimizes energy allocation and queue management, and a Q-learning based algorithm minimizes overall task completion time. The result improved energy efficiency and reduced delays while maintaining a high level of security.

Table 2 summarizes the key related works. Notably, most prior works focus on either learning-based scheduling or digital twin simulation, but few combine these with hierarchical control or explicit uncertainty handling. The proposed approach addresses a similar task and resource management problem but introduces a distinct combination of features: (i) a hierarchical orchestration structure in which a Global Fog Orchestrator coordinates policy across domains while Local Fog Orchestrators execute real-time control, (ii) integration of a digital twin layer as a continuously synchronized predictive state model for short-horizon mobility and load forecasting, and (iii) an entropy-based confidence mechanism that gates proactive handoff and migration so that actions are taken only when predictions are sufficiently reliable. This orchestrator continuously adapts to system dynamics and outperforms both reactive and mobility-only baselines in our evaluations. In the following Table 2, we detail how our approach builds on and differentiates from these prior works.

Table 2: Summary of key related works

While existing literature addresses mobility prediction, digital twin orchestration, and fog scheduling individually, there is limited work that considers mobility prediction, congestion forecasting, and decision making under uncertainty within a comprehensive control framework.

Further, existing approaches to fog scheduling can be classified as mobility-aware heuristic, learning-based, or digital twin-based [14,15,19,20]. Despite their common goal, significant differences can be observed between these approaches in how they address mobility prediction, resource awareness, and uncertainty handling. A conceptual comparison between some state-of-the-art approaches and the proposed approach is presented in Table 3.

Table 3: Conceptual comparison of state-of-the-art approaches with the proposed approach

3  Proposed Framework

3.1 System Overview

The proposed fog computing architecture with an integrated digital-twin (DT) control plane is shown in Fig. 1. The system is organized into three tightly coupled layers: (i) a physical execution layer, (ii) a digital twin layer, and (iii) a hierarchical orchestration layer comprising a Local Fog Orchestrator (LFO) and a Global Fog Orchestrator (GFO). Let 𝒰 denote the set of mobile user devices and ℱ the set of fog nodes.


Figure 1: System architecture.

End-user devices ui ∈ 𝒰 generate computational tasks ti(t) over discrete time t ∈ {1, 2, …}. Each task can be executed on a nearby fog node or offloaded to the cloud data centre. The serving association of user ui at time t is denoted by a_i(t) ∈ ℱ ∪ {cloud}, and the set of candidate fog nodes within the communication range of user ui is 𝒩_i(t). Above the physical layer, the DT layer maintains synchronized virtual replicas of key entities: each physical user ui is mirrored by a user twin ũi, and each fog node fj is mirrored by a fog twin f̃j. The DT layer is updated through periodic state synchronization with a sampling period δ. Concretely, at each discrete time t, the user side and fog side generate telemetry that is mirrored into the corresponding twins, i.e., o_i^u(t) and o_j^f(t).

Further, the Local Fog Orchestrator (LFO) performs real-time control within its local fog domain, while the Global Fog Orchestrator (GFO) provides system-wide policy and coordination. The GFO exchanges Policy & Coordination messages with each LFO and interacts with the cloud data centre for long-horizon optimisation, overflow execution, and model lifecycle support. Within each local domain, the LFO: (i) receives synchronized DT states, (ii) derives prediction signals for mobility and fog load, and (iii) issues control actions to the physical layer, including target fog selection for offloading, proactive handoff triggering, and task migration between fog nodes. The DT layer exposes bidirectional coupling between User Twins and Fog Twins, allowing mobility tendencies to be evaluated jointly with service feasibility. Finally, the LFO forwards compact summaries, such as the predicted next association, confidence, and predicted fog-load indicators, upward to the GFO to enable cross-domain coordination and cloud-assisted decisions when users move beyond a local LFO’s coverage.

3.2 Digital Twin Construction and State Synchronization

In the proposed hierarchy given in Fig. 1, the Fog Orchestrator (FO) is realized by (i) a set of LFOs deployed on selected fog nodes, and (ii) a GFO deployed in the cloud. Each LFO manages the digital twins within its domain and executes real-time control. In the digital twin model, each physical entity has a corresponding twin. For each user device ui𝒰, the LFO maintains a user twin u~i with state vector

S_i^u(t) = (L_i(t), v_i(t), tl_i(t), a_i(t), RSSI_i(t), SNR_i(t)),    (1)

where L_i(t) ∈ ℝ² is the geographical location of user ui at time t, represented as a two-dimensional coordinate; v_i(t) ∈ ℝ² is the velocity vector of user ui at time t, representing both speed and direction of movement; tl_i(t) is the current task load at user ui, representing the amount of computation pending execution or offloading; a_i(t) ∈ ℱ ∪ {cloud} is the serving association of user ui at time t, indicating the fog node or cloud instance currently responsible for processing the user’s tasks; RSSI_i(t) is the received signal strength indicator observed by user ui from its serving node at time t; and SNR_i(t) is the signal-to-noise ratio experienced by user ui at time t. Similarly, for each fog node fj, the LFO maintains a fog twin f̃j with state vector

S_j^f(t) = (ρ_j(t), M_j(t), B_j(t), Q_j(t), P_j(t)),    (2)

where ρ_j(t) ∈ [0, 1) is the CPU utilization of fog node fj at time t; M_j(t) is the available memory at fog node fj at time t; B_j(t) is the available network bandwidth at fog node fj for serving connected users at time t; Q_j(t) is the queue length at fog node fj at time t, representing the number of tasks waiting for execution and directly influencing waiting delay; and P_j(t) is the power consumption of fog node fj at time t, measured in watts. The state synchronization of both twins is performed in discrete time with period δ; thus, at each time t, the LFO holds the mirrored states of the users and fog nodes in its domain.
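The twin state vectors of Eqs. (1) and (2) can be mirrored as simple records in the orchestrator. The sketch below is an illustrative Python rendering; the field names and example values are our own, not from the paper:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class UserTwinState:
    """Mirrors Eq. (1): S_i^u(t) for one user twin."""
    location: Tuple[float, float]   # L_i(t), 2-D coordinate
    velocity: Tuple[float, float]   # v_i(t), speed and direction
    task_load: float                # tl_i(t), pending computation
    association: str                # a_i(t), serving fog node or "cloud"
    rssi: float                     # RSSI_i(t), e.g., in dBm
    snr: float                      # SNR_i(t), e.g., in dB

@dataclass
class FogTwinState:
    """Mirrors Eq. (2): S_j^f(t) for one fog twin."""
    cpu_util: float     # rho_j(t), in [0, 1)
    mem_free: float     # M_j(t), available memory
    bandwidth: float    # B_j(t), available bandwidth
    queue_len: int      # Q_j(t), tasks waiting for execution
    power: float        # P_j(t), watts

# Example telemetry snapshot mirrored into the twins at one sync period
u = UserTwinState((10.0, 5.0), (1.2, 0.0), 3.0, "f1", -62.0, 18.5)
f = FogTwinState(0.45, 2048.0, 80.0, 7, 35.0)
```

At each synchronization period δ, the LFO would overwrite these records with fresh telemetry, so the twins always hold the most recent mirrored state.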

3.3 GRAIN-Based Mobility and Next-Fog Association Prediction

At each decision epoch t, the Local Fog Orchestrator (LFO) executes the following sequence. First, it forms the user-twin input state S_i^u(t) from the synchronized mobility and link-quality features given in Eq. (1). Second, these inputs are smoothed by EWMA via Eq. (3) with factor α_x = 0.3 to reduce short-term wireless and mobility noise. Third, the most recent h = 10 smoothed states are processed by a one-layer GRU encoder with hidden dimension d_h = 64 to obtain the temporal embedding h_i(t), given in Eq. (5). Fourth, the model predicts (i) a probability distribution over candidate fog nodes p_i^(k)(t+Δ) using a softmax output layer in Eq. (6) and (ii) the future user location L̂_i(t+Δ) given in Eq. (7), over a lookahead horizon of Δ = 30 steps. Fifth, the normalized entropy of the predicted association distribution is converted into a confidence score c_i(t+Δ), as given in Eq. (8); proactive actions are allowed only when c_i(t+Δ) ≥ c_min, with c_min = 0.6. In parallel, the LFO forecasts fog-side utilization and queue state using EWMA with α_z = 0.4, given in Eq. (9). Finally, the next serving fog node is selected by minimizing the composite mobility-load score Γ_{i,k}(t+Δ), as presented in Eq. (10), where the fusion weights are set to β_ρ = 1.0 and β_Q = 0.5, and Q̂ is normalized by Q_max = 50.
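The exact form of the composite score Γ_{i,k}(t+Δ) in Eq. (10) is not reproduced in this section, so the sketch below shows one plausible fusion consistent with the stated parameters (β_ρ = 1.0, β_Q = 0.5, Q_max = 50) and with the fact that the next fog node is chosen by minimization: predicted congestion is penalized, predicted association probability is rewarded. The helper names are hypothetical:

```python
def next_fog_score(p_k, rho_hat_k, q_hat_k,
                   beta_rho=1.0, beta_q=0.5, q_max=50.0):
    """Illustrative composite mobility-load score for candidate f_k.
    p_k: predicted association probability; rho_hat_k: forecast CPU
    utilization; q_hat_k: forecast queue length. Lower is better."""
    return beta_rho * rho_hat_k + beta_q * (q_hat_k / q_max) - p_k

def select_next_fog(p, rho_hat, q_hat):
    """Pick the candidate fog node minimizing the composite score.
    All arguments are dicts keyed by candidate fog-node id."""
    return min(p, key=lambda k: next_fog_score(p[k], rho_hat[k], q_hat[k]))

# A congested but mobility-likely node can lose to a lighter alternative:
p = {"f1": 0.7, "f2": 0.3}
choice = select_next_fog(p, {"f1": 0.9, "f2": 0.1}, {"f1": 10.0, "f2": 10.0})
```

This illustrates the intended behavior: even though f1 is the mobility-predicted favorite, its forecast congestion can push the decision toward f2.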

Mobility and fog-service dynamics are modelled in discrete time. At time t, user i is associated with a serving node a_i(t) ∈ ℱ ∪ {cloud} and has a candidate set 𝒩_i(t). The Local Fog Orchestrator (LFO) runs GRAIN, which combines (i) EWMA smoothing for noisy mobility/link observations and (ii) a GRU encoder for learning short-term temporal dependencies. In parallel, the LFO applies EWMA over fog-twin states to forecast near-future load indicators.

For each user i, the LFO constructs the input state vector using synchronized state values from the user twin S_i^u(t) as per Eq. (1). These features jointly characterize both physical movement and communication quality, forming the raw input to the temporal learning module. To suppress short-term fluctuations, each input dimension is smoothed using EWMA, as given in Eq. (3).

S̃_i^u(t) = α_x · S_i^u(t) + (1 − α_x) · S̃_i^u(t−1),   0 < α_x ≤ 1.    (3)

where S̃_i^u(t) denotes the EWMA-smoothed state vector, and α_x is the smoothing factor, which controls the trade-off between responsiveness and stability. The model assumes a fixed value of α_x = 0.3, as it provides a favourable trade-off between responsiveness and noise suppression. Generally, a smaller α_x increases smoothing but introduces lag in handoff and migration decisions, whereas a larger α_x reacts quickly but transmits more measurement noise to the GRU input. Section 6.2 analyzes how α_x and α_z (the fog-load EWMA factor) impact end-to-end QoS and migration behavior.
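The smoothing rule of Eq. (3) amounts to an elementwise EWMA over the user-twin feature vector. A minimal sketch, assuming the filter is initialized with the first observation:

```python
def ewma_smooth(x_t, s_prev, alpha=0.3):
    """Eq. (3): elementwise EWMA over the user-twin feature vector.
    x_t: current raw features S_i^u(t); s_prev: previous smoothed
    vector (None on the first sample, which initializes the filter)."""
    if s_prev is None:
        return list(x_t)
    return [alpha * x + (1 - alpha) * s for x, s in zip(x_t, s_prev)]

# Smoothing a noisy RSSI-like sequence: one jump is attenuated by alpha
s = None
for x in ([-60.0], [-60.0], [-40.0]):
    s = ewma_smooth(x, s)
```

With α = 0.3, a single jump from −60 to −40 moves the smoothed value only 30% of the way, which is exactly the noise suppression (and lag) the paper discusses.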

At time t, the model does not rely only on the latest EWMA-smoothed state S̃_i^u(t). Instead, it forms a history window of the most recent h smoothed states, as shown in Eq. (4).

S̃_i^u(t−h+1), S̃_i^u(t−h+2), …, S̃_i^u(t).    (4)

This window captures the recent evolution of the user’s mobility and link conditions, including changes in location and speed, as well as temporal variations in wireless quality. Hence, the GRU operates on a short trajectory segment together with its corresponding link-quality evolution, rather than a single instantaneous snapshot. Let h denote the history length. The GRU encoder produces a latent embedding as given in Eq. (5).

h_i(t) = φ_θ(S̃_i^u(t−h+1 : t)),    (5)

where φ_θ(·) denotes a GRU-based sequence encoder, which takes the last h EWMA-smoothed input vectors S̃_i^u(t−h+1 : t) and compresses them into a fixed-length representation. The parameters θ are the learnable weights of the GRU, trained from data.
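An encoder such as φ_θ in Eq. (5) can be sketched as repeated application of a GRU cell over the history window. The minimal NumPy version below uses random, untrained weights in place of the learned parameters θ, purely to show the data flow (gate-ordering and mixing conventions vary slightly across GRU implementations):

```python
import numpy as np

def gru_cell(x, h_prev, W, U, b):
    """One GRU step. W, U, b stack the update/reset/candidate
    parameters; x: (d_in,), h_prev: (d_h,). Returns new h: (d_h,)."""
    def sig(a):
        return 1.0 / (1.0 + np.exp(-a))
    z = sig(W[0] @ x + U[0] @ h_prev + b[0])            # update gate
    r = sig(W[1] @ x + U[1] @ h_prev + b[1])            # reset gate
    n = np.tanh(W[2] @ x + U[2] @ (r * h_prev) + b[2])  # candidate state
    return (1 - z) * h_prev + z * n                     # one common convention

def encode_window(window, d_h=64, seed=0):
    """phi_theta in Eq. (5): fold the last h smoothed states into a
    fixed-length embedding h_i(t). Random weights stand in for theta."""
    rng = np.random.default_rng(seed)
    d_in = len(window[0])
    W = rng.normal(0, 0.1, (3, d_h, d_in))
    U = rng.normal(0, 0.1, (3, d_h, d_h))
    b = np.zeros((3, d_h))
    h = np.zeros(d_h)
    for x in window:                 # consume the window in time order
        h = gru_cell(np.asarray(x, float), h, W, U, b)
    return h

emb = encode_window([[0.1] * 6 for _ in range(10)], d_h=8)
```

The embedding has a fixed dimension regardless of window length, which is what allows the downstream output heads to be simple linear maps.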

Given h_i(t), the model predicts a probability distribution p_i^(k) over candidate fog nodes, yielding a comparable score across candidates for the most likely next association of the user, as presented in Eq. (6).

p_i^(k)(t+Δ) = Pr(a_i(t+Δ) = f_k | h_i(t)) = softmax_{k ∈ 𝒩_i(t)}(W_o h_i(t) + b_o),    (6)

where a_i(t+Δ) is the random variable denoting the serving-node association of user i at the future time t+Δ; f_k denotes the k-th candidate fog node in the feasible set 𝒩_i(t); p_i^(k)(t+Δ) is the predicted probability that the future association equals f_k, i.e., a_i(t+Δ) = f_k, conditioned on the current temporal embedding h_i(t); the term W_o h_i(t) + b_o is a linear output layer that maps the embedding h_i(t) into a vector of unnormalized scores (logits), one per candidate fog node; and softmax(·) converts these logits into probabilities over the candidate set. In addition, the model predicts the Δ-step-ahead location, given in Eq. (7).

$\hat{L}_i(t+\Delta) = \psi_\eta\big(h_i(t)\big),$ (7)

where ψη() is a regression head with parameters η.
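To make Eqs. (5) and (6) concrete, the following NumPy sketch implements a single-layer GRU forward pass over the history window and the linear-plus-softmax association head. All weight names and dimensions here are illustrative assumptions; the paper's predictor would be trained with a deep-learning framework rather than hand-rolled NumPy, and the regression head of Eq. (7) is omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_encode(seq, Wz, Uz, Wr, Ur, Wh, Uh):
    """Single-layer GRU over the history window (Eq. (5)): returns h_i(t)."""
    hidden = np.zeros(Uz.shape[0])
    for x in seq:                                    # one smoothed state per step
        z = sigmoid(Wz @ x + Uz @ hidden)            # update gate
        r = sigmoid(Wr @ x + Ur @ hidden)            # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * hidden))
        hidden = (1.0 - z) * hidden + z * h_tilde    # gated state update
    return hidden                                    # fixed-length embedding

def predict_association(hidden, Wo, bo):
    """Linear output layer plus softmax over the candidate set (Eq. (6))."""
    logits = Wo @ hidden + bo
    exp = np.exp(logits - logits.max())              # numerically stable softmax
    return exp / exp.sum()
```

The returned vector is a valid probability distribution over the candidate fog nodes, which is exactly what the confidence computation in Eq. (8) consumes.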

The GRU output in Eq. (6) provides a probability distribution over candidate fog nodes, but this distribution may express either high certainty or high uncertainty. Therefore, a scalar confidence score is computed to quantify how reliable the predicted association tendency is. This confidence is useful for (i) deciding whether to trigger proactive handoff or migration, (ii) avoiding unnecessary switching when the prediction is ambiguous, and (iii) enabling risk-aware coordination, e.g., forwarding only high-confidence events to the GFO. Let $p_i(t+\Delta) = \{p_i^{(k)}(t+\Delta)\}_{f_k \in \mathcal{N}_i(t)}$ denote the probability vector. A highly peaked distribution yields low entropy, indicating strong certainty, whereas a near-uniform distribution yields high entropy, indicating ambiguity. To make entropy comparable across different candidate-set sizes, it is normalised by the maximum possible entropy $\log|\mathcal{N}_i(t)|$. The resulting confidence $c_i(t+\Delta)$ is defined in Eq. (8).

$c_i(t+\Delta) = 1 - \dfrac{H\big(p_i(t+\Delta)\big)}{\log|\mathcal{N}_i(t)|}, \qquad H(p) = -\sum_{f_k \in \mathcal{N}_i(t)} p^{(k)} \log p^{(k)}.$ (8)

The confidence score $c_i(t+\Delta)$ in Eq. (8) lies in $[0,1]$ and indicates how strongly the predictor favors a single next fog node. Values close to 1 mean the predicted next-fog distribution is very peaked, while smaller values mean the prediction is more spread out and uncertain. Let $p_{\max}(t+\Delta) \triangleq \max_{f_k \in \mathcal{N}_i(t)} p_i^{(k)}(t+\Delta)$ be the largest predicted probability among the candidate fog nodes. In the LFO, proactive handoff preparation and task migration are triggered only when the prediction is sufficiently confident, i.e., when $c_i(t+\Delta) \ge c_{\min}$ as in Eq. (24). The value of $c_{\min}$ is set to 0.6 as a conservative operating point, so the control plane reacts only when the prediction is strongly concentrated.
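The normalized-entropy confidence and the $c_{\min}$ gate can be sketched as follows. This is a minimal illustration of Eq. (8) with the paper's $c_{\min} = 0.6$; the small `eps` guard against $\log 0$ is our implementation assumption.

```python
import numpy as np

def confidence(p, eps=1e-12):
    """Normalized-entropy confidence of Eq. (8): 1 - H(p) / log|N_i(t)|."""
    p = np.asarray(p, dtype=float)
    entropy = -np.sum(p * np.log(p + eps))   # Shannon entropy H(p)
    return 1.0 - entropy / np.log(len(p))    # normalise by max entropy

def gate_proactive(p, c_min=0.6):
    """Allow proactive handoff/migration only when confidence >= c_min."""
    return confidence(p) >= c_min
```

A uniform distribution over the candidates yields confidence near 0 (the action is suppressed), while a sharply peaked distribution yields confidence near 1 (the action is permitted).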

After estimating the user's mobility-driven association tendency and its confidence, the LFO must also ensure that the selected fog node can actually serve the user efficiently. A fog node that is likely to be the next association, i.e., one with high $p_i^{(k)}$, may still be a poor choice if it is expected to be congested in the near future. Hence, the LFO forecasts near-future fog load indicators from synchronized fog-twin states. These forecasts are later fused with the mobility tendency to make a load-aware next-fog decision. Therefore, for any scalar fog metric $z_j(t)$, such as utilization $\rho_j(t)$ or queue length $Q_j(t)$, the LFO maintains an EWMA state $\hat{z}_j(t)$ and updates the one-step-ahead forecast as given in Eq. (9).

$\hat{z}_j(t+1) = \alpha_z\, z_j(t) + (1-\alpha_z)\,\hat{z}_j(t), \qquad \alpha_z \in (0,1],$ (9)

Here, $\alpha_z$ controls the responsiveness of the forecast: a larger $\alpha_z$ reacts faster to sudden load changes, while a smaller $\alpha_z$ provides stronger smoothing and greater stability.

So, up to this point, the LFO has obtained two complementary signals: (i) a mobility-driven association tendency {pi(k)(t+Δ)} from the GRU, as given in Eq. (6), indicating which fog node the user is most likely to move toward, and (ii) near-future congestion forecasts such as ρ^k(t+Δ) and Q^k(t+Δ) from fog-level EWMA, as given in Eq. (9), indicating which fog nodes are expected to be heavily loaded. The goal now is to combine these two signals to make a final, load-aware next-fog decision that avoids both unnecessary handoffs and congested nodes. Hence, for each candidate fk𝒩i(t), the LFO defines the mobility-load score Γi,k as given in Eq. (10). Fig. 2 represents the proactive handoff and task migration with digital twin.

$\Gamma_{i,k}(t+\Delta) = -\log p_i^{(k)}(t+\Delta) + \beta_\rho\, \hat{\rho}_k(t+\Delta) + \beta_Q\, \hat{Q}_k(t+\Delta),$ (10)

where $-\log p_i^{(k)}(t+\Delta)$ penalizes candidates that are unlikely according to the GRU. If $p_i^{(k)}$ is high (mobility-preferred), then $-\log p_i^{(k)}$ is small; if $p_i^{(k)}$ is low, the penalty becomes large. Thus, $-\log p_i^{(k)}(t+\Delta)$ encourages selecting fog nodes that match the predicted user movement. The term $\beta_\rho\, \hat{\rho}_k(t+\Delta)$ discourages selecting a node predicted to have high utilization; since queueing delay grows rapidly as utilization approaches 1, this term supports delay-aware decisions. Moreover, $\beta_Q\, \hat{Q}_k(t+\Delta)$ discourages selecting nodes with predicted queue build-up, capturing short-term congestion that may not be fully reflected by utilization alone. Here, the weights $\beta_\rho, \beta_Q \ge 0$ control the trade-off between following mobility preference and avoiding congestion. Finally, the next serving fog node is chosen by minimizing the composite score:

$\widehat{\text{nextFog}}(i, t+\Delta) = \arg\min_{f_k \in \mathcal{N}_i(t)} \Gamma_{i,k}(t+\Delta).$ (11)

Figure 2: Proactive handoff and task migration under predicted mobility.

This rule selects the candidate that is simultaneously (i) likely from the mobility perspective and (ii) predicted to be less congested from the load perspective.
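The fusion rule of Eqs. (10) and (11) can be sketched directly. The weights $\beta_\rho = 1.0$ and $\beta_Q = 0.5$ below are illustrative assumptions (the paper does not fix them here), as is the `eps` guard against $\log 0$.

```python
import numpy as np

def next_fog(p, rho_hat, q_hat, beta_rho=1.0, beta_q=0.5, eps=1e-12):
    """Mobility-load fusion of Eqs. (10)-(11): score candidates, take argmin.

    p       : predicted association probabilities p_i^(k)(t + Delta)
    rho_hat : EWMA-forecast utilisations rho_hat_k(t + Delta)
    q_hat   : EWMA-forecast queue lengths Q_hat_k(t + Delta)
    """
    p = np.asarray(p, dtype=float)
    scores = (-np.log(p + eps)
              + beta_rho * np.asarray(rho_hat, dtype=float)
              + beta_q * np.asarray(q_hat, dtype=float))
    return int(np.argmin(scores)), scores
```

With equal predicted load, the mobility-preferred candidate wins; when the mobility-preferred node is forecast to be congested, the score flips toward a less loaded alternative, which is exactly the behavior the rule is designed to produce.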

Finally, the predicted future location $\hat{L}_i(t+\Delta)$ from Eq. (7) is used for a lightweight feasibility check. The LFO verifies that the selected node $\widehat{\text{nextFog}}(i, t+\Delta)$ provides coverage at $\hat{L}_i(t+\Delta)$. If the coverage constraint is violated, the LFO falls back to the best-scoring candidate in $\mathcal{N}_i(t)$ that is feasible under the predicted location.

3.4 Mobility-Aware Handoff & Proactive Task Migration Control Logic

The digital-twin layer acts as a cognitive control plane for the fog system by continuously synchronizing user and fog states and enabling proactive decisions. In particular, the previous section (Section 3.3) provides (i) the predicted future location L^i(t+Δ) and (ii) the predicted next serving fog nextFog^(i,t+Δ) with confidence. This subsection formalizes how these predictions are used to (i) detect an impending handoff and (ii) trigger task migration when a task is unlikely to finish before the user leaves the current fog coverage.

Let $a_i(t) = f_j$ denote the current serving fog node for user $i$ at time $t$, where $f_j \in \mathcal{F}$. The user-fog distance is given in Eq. (12).

$d(i,j,t) = \big\| L_i(t) - L_j \big\|,$ (12)

where $L_j$ is the fixed location of fog node $f_j$. The wireless link to $f_j$ is feasible only when the user is within the coverage radius $R_j$. The communication latency between user $i$ and fog node $f_j$ is also modelled, as given in Eq. (13).

$\ell_{i,j}(t) = \begin{cases} \ell_{i,j}^{\text{finite}}(t), & \text{if } d(i,j,t) \le R_j, \\ \infty, & \text{if } d(i,j,t) > R_j, \end{cases}$ (13)

where $\ell_{i,j}^{\text{finite}}(t)$ is the finite latency when coverage holds. Using the mobility forecast $\hat{L}_i(t+\Delta)$, the LFO checks whether the current serving fog remains feasible at the lookahead horizon, presented by Eq. (14).

$d(i,j,t+\Delta) = \big\| \hat{L}_i(t+\Delta) - L_j \big\| > R_j \;\Longrightarrow\; \text{handoff required within } [t, t+\Delta].$ (14)

When Eq. (14) holds, the current link to $f_j$ is predicted to become infeasible, i.e., $\ell_{i,j}(t+\Delta) = \infty$, and the system prepares to hand off to the predicted next fog node $\widehat{\text{nextFog}}(i, t+\Delta)$.
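The coverage and latency logic of Eqs. (12)-(14) amounts to a norm comparison against the coverage radius; a minimal sketch (function names are illustrative) is:

```python
import numpy as np

def link_latency(user_loc, fog_loc, radius, finite_latency):
    """Eq. (13): finite latency inside coverage, infinite outside."""
    d = np.linalg.norm(np.asarray(user_loc, float) - np.asarray(fog_loc, float))
    return finite_latency if d <= radius else float("inf")

def handoff_required(predicted_loc, fog_loc, radius):
    """Eq. (14): handoff is needed if the PREDICTED location leaves coverage."""
    d = np.linalg.norm(np.asarray(predicted_loc, float) - np.asarray(fog_loc, float))
    return d > radius
```

Note that the handoff check uses the forecast location $\hat{L}_i(t+\Delta)$, not the current one, which is what makes the preparation proactive rather than reactive.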

Similarly, for proactive task migration, consider a task $t_k$ currently executing on the serving fog $f_j = a_i(t)$. Let $\hat{T}_{k,j}^{\text{rem}}(t)$ denote the predicted remaining execution time of task $t_k$ on $f_j$ at time $t$, and let $\hat{\tau}_i^{(j)}(t)$ denote the predicted remaining dwell time of user $i$ within the coverage of $f_j$. A proactive migration of task $t_k$ from $f_j$ to the predicted next fog $f_\ell$, where $f_\ell = \widehat{\text{nextFog}}(i, t+\Delta)$, is triggered as per Eq. (15), ensuring that migration is initiated only when the task is unlikely to finish before the user leaves the current fog coverage, thereby reducing forced restarts and deadline misses under mobility.

$m_{k, j \to \ell}(t) = 1 \iff \Big(\widehat{\text{nextFog}}(i, t+\Delta) = f_\ell \;\wedge\; \hat{T}_{k,j}^{\text{rem}}(t) > \hat{\tau}_i^{(j)}(t) - T_j^{\text{mig}}\Big),$ (15)

where $T_j^{\text{mig}}$ is the migration overhead.
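The trigger of Eq. (15) reduces to a single boolean test; a minimal sketch (the function and parameter names are illustrative) is:

```python
def should_migrate(predicted_next_fog, candidate_dest, t_rem, dwell, t_mig):
    """Eq. (15): migrate a task to the predicted next fog only when its
    remaining execution time exceeds the user's remaining dwell time in the
    current coverage, minus the migration overhead.

    predicted_next_fog : id of nextFog^(i, t + Delta)
    candidate_dest     : id of the candidate destination f_l
    t_rem              : predicted remaining execution time T^rem (s)
    dwell              : predicted remaining dwell time tau (s)
    t_mig              : migration overhead T^mig (s)
    """
    return predicted_next_fog == candidate_dest and t_rem > dwell - t_mig
```

The subtraction of `t_mig` means the migration starts early enough for the state transfer itself to complete before coverage is lost, which is what prevents forced restarts.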

3.5 Algorithm

This section presents the Algorithm 1 of the proposed model. Lines 1–2, the algorithm begins a new control epoch t and immediately synchronizes the digital twins (Line 2). This step ensures the LFO’s decisions are based on the latest mirrored user and fog states, which is essential because both mobility and load can change significantly between epochs. In Lines 3–5, the LFO updates near-future fog congestion estimates for each fog node using the EWMA forecasting rule in Eq. (9). This produces lightweight forecasts that approximate the level of busyness each fog node will experience over the next decision horizon. These forecasts are required because choosing the “mobility-preferred” fog alone can lead to high queueing delay when the preferred node is about to become congested. Further, in Lines 6–10, for each user, the LFO first constructs the feasible candidate set 𝒩i(t) using coverage at the current time (Line 7). If 𝒩i(t) is empty in Lines 8–10, the algorithm assigns the user to the cloud as a safe fallback, because no local fog can currently serve the user. Next in Lines 11–16, the LFO constructs the instantaneous user feature vector (Line 11; Eq. (1)) and applies EWMA smoothing (Line 12; Eq. (3)) to suppress short-term fluctuations in mobility/link measurements. The smoothed vector is appended to a rolling history buffer (Line 13). If the history length is still shorter than h (Lines 14–16), the GRU encoder cannot yet form a reliable temporal embedding, so the algorithm conservatively keeps the current association until sufficient history is accumulated. In lines 17–20, the LFO computes the GRU embedding hi(t) (Line 17; Eq. (5)), then uses it to predict (i) the association tendency distribution over candidates (Line 18; Eq. (6)) and (ii) the future user location L^i(t+Δ) (Line 19; Eq. (7)). Finally, it computes the confidence score (Line 20; Eq. (8)). 
This confidence quantifies whether the predicted association tendency is sharply concentrated (high certainty) or diffuse (high uncertainty), which is critical for controlling when proactive actions are triggered. In Lines 21–25, for each candidate fog node, the LFO computes the composite mobility-load score (Lines 21–23; Eq. (10)), which penalizes unlikely mobility choices while also penalizing predicted congestion. The next fog is selected as the candidate with the minimum composite score (Line 24; Eq. (11)). The algorithm then performs a feasibility check against the predicted future location (Line 25), i.e., if the chosen fog does not cover $\hat{L}_i(t+\Delta)$, the algorithm falls back to the best-scoring feasible candidate. The LFO checks whether a handoff is likely within the lookahead horizon using the coverage-loss condition in Eq. (14) (Line 26) and gates the action by confidence ($c_i(t+\Delta) \ge c_{\min}$). If both conditions hold, the LFO triggers proactive handoff preparation (Line 27) toward the selected $\widehat{\text{nextFog}}$. In Lines 29–33, for each ongoing task currently executing on the present fog, the LFO checks whether the migration condition holds (Lines 30–31; Eq. (15)). If so, it triggers migration to the predicted next fog (Line 31). This step directly supports service continuity: tasks are migrated only when they are unlikely to complete before the user leaves the current fog's coverage, and only toward the LFO-selected next fog. Finally, in Lines 35–36, at the end of the epoch, the LFO may send compact summaries to the GFO, enabling higher-level coordination across domains. The flowchart is also presented in Fig. 3.

Figure 3: Workflow of the proposed model.

3.6 Fog Execution Delay and Energy Model

Each fog node fj is modeled as an M/G/1 queue to capture load-dependent waiting delay under heterogeneous task sizes. Let λj denote the aggregate task arrival rate to fj (tasks/s) during a control epoch, and let Cj be the processing capacity of fj (CPU cycles/s). For a task indexed by ν with workload Wν CPU cycles, the service time on fj is defined as,

$T_{\nu,j}^{\text{svc}} = \dfrac{W_\nu}{C_j}.$ (16)

Let $T_j^{\text{svc}}$ be the generic service-time random variable at node $j$ induced by the workload distribution of tasks assigned to $f_j$. The utilization of node $j$ is given by

$\rho_j = \lambda_j\, E\big[T_j^{\text{svc}}\big], \qquad 0 \le \rho_j < 1,$ (17)

where $\rho_j$ is dimensionless. The stability condition $\rho_j < 1$ is critical: as $\rho_j \to 1$, the expected queueing delay grows rapidly, reflecting congestion and increased deadline misses.

Using M/G/1 allows the scheduler to account for both (i) average load through $\rho_j$ and (ii) workload variability through the second moment $E\big[(T_j^{\text{svc}})^2\big]$. Variability matters in fog systems because a mix of small and large tasks can lead to long waiting times even when the mean load is moderate.

By the Pollaczek–Khinchine formula, the mean response time, which includes waiting and service time at node j is presented as,

$E\big[T_j^{\text{resp}}\big] = E\big[T_j^{\text{svc}}\big] + \dfrac{\lambda_j\, E\big[(T_j^{\text{svc}})^2\big]}{2(1-\rho_j)}.$ (18)

The first term is the mean processing time, while the second term is the mean waiting-time component that increases with arrival rate $\lambda_j$, service-time variability, and proximity to saturation ($\rho_j \to 1$).

For a specific task $\nu$, its execution time on $f_j$ is approximated as its own processing time plus the node-level mean queueing term,

$T_{\nu,j}^{\text{exec}} \approx T_{\nu,j}^{\text{svc}} + \dfrac{\lambda_j\, E\big[(T_j^{\text{svc}})^2\big]}{2(1-\rho_j)}.$ (19)

This approximation is computationally efficient for online scheduling, where the scheduler can evaluate multiple candidate fog nodes using the current DT-estimated load (λj,ρj) while still respecting that each task has its own workload Wν.
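A minimal sketch of the per-task estimate in Eqs. (16)-(19) follows; the function name and the explicit stability check for $\rho_j \ge 1$ are our assumptions.

```python
def mg1_exec_time(w_task, c_node, lam, svc_mean, svc_second_moment):
    """Per-task execution-time estimate of Eq. (19): own service time plus
    the mean M/G/1 waiting term from the Pollaczek-Khinchine formula.

    w_task            : task workload W_nu (CPU cycles)
    c_node            : node capacity C_j (cycles/s)
    lam               : arrival rate lambda_j (tasks/s)
    svc_mean          : E[T_svc] of the node's workload mix (s)
    svc_second_moment : E[T_svc^2] (s^2)
    """
    rho = lam * svc_mean                       # utilisation, Eq. (17)
    if rho >= 1.0:
        raise ValueError("unstable queue: rho must be < 1")
    wait = lam * svc_second_moment / (2.0 * (1.0 - rho))  # P-K waiting term
    return w_task / c_node + wait              # Eq. (19)
```

For example, with deterministic 0.1 s service times (second moment 0.01 s²) and 5 tasks/s, utilisation is 0.5 and the waiting term adds 0.05 s on top of each task's own service time.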

Further, the energy consumed by executing task $\nu$ on $f_j$ is modeled as,

$E_{\nu,j}^{\text{exec}} = P_j(\rho_j)\; T_{\nu,j}^{\text{exec}},$ (20)

i.e., energy equals power times time. Here, Pj(ρj) is the average power draw of node j under utilization ρj. The model adopts a linear power-utilisation model, presented as,

$P_j(\rho_j) = P_j^{\text{idle}} + \big(P_j^{\text{peak}} - P_j^{\text{idle}}\big)\,\rho_j.$ (21)

This model captures a key fog characteristic: even when lightly loaded, edge servers consume non-trivial idle power, and energy increases with utilization. Together, Eqs. (19)–(21) provide a tractable, load-aware cost model used later in the end-to-end latency and energy surrogates and in the scheduling objective.
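The linear power model and the energy product of Eqs. (20)-(21) can be sketched in two small helpers (names illustrative):

```python
def node_power(rho, p_idle, p_peak):
    """Eq. (21): linear power-utilisation model, interpolating between
    idle power (rho = 0) and peak power (rho = 1)."""
    return p_idle + (p_peak - p_idle) * rho

def task_energy(rho, exec_time, p_idle, p_peak):
    """Eq. (20): energy = average power at utilisation rho times the
    task's execution time from Eq. (19)."""
    return node_power(rho, p_idle, p_peak) * exec_time
```

For instance, a node with 100 W idle and 200 W peak power running at 50% utilisation draws 150 W, so a 2 s task costs 300 J under this model.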

4  Objective Function

The proposed DT-driven LFO operates in discrete time with synchronization period $\delta$, meaning that at each epoch $t$ it observes an updated virtual mirror of both the users and the fog nodes. This synchronized view provides (i) the current coverage context via user and fog locations and (ii) the current congestion context via fog utilization/queue states. However, under mobility and time-varying load, decisions based only on instantaneous observations can be short-sighted. For this reason, the LFO additionally leverages the GRAIN predictors from Section 3.3: the future user location $\hat{L}_i(t+\Delta)$ in Eq. (7) captures where the user is expected to be at the look-ahead horizon, $\widehat{\text{nextFog}}(i, t+\Delta)$ in Eq. (11) captures the most suitable next serving node after mobility-load fusion, and the confidence score $c_i(t+\Delta)$ in Eq. (8) quantifies whether the association tendency estimate is reliable enough to justify proactive actions. Accordingly, the LFO's control action at epoch $t$ is expressed through two decision structures, i.e., the assignment matrix $X(t)$ for newly-arrived tasks, and the migration tensor $M(t)$ for ongoing tasks that may require relocation to preserve service continuity.

To tightly couple the optimization to the DT predictors while keeping the decision variables simple, a small set of binary feasibility indicators is pre-computed. The indicator $\chi_{i,j}(t)$ in Eq. (22) enforces the physical constraint that a task can be offloaded to a fog node only when the user is currently within its coverage. The indicator $\chi_{i,j}(t+\Delta)$ in Eq. (23) extends this logic to the future by using the predicted location $\hat{L}_i(t+\Delta)$, enabling the model to anticipate imminent coverage loss. Since proactive migration can be harmful when predictions are uncertain, the confidence gate $g_i(t+\Delta)$ is introduced in Eq. (24), which permits proactive actions only when $c_i(t+\Delta)$ exceeds a threshold. Finally, the selector $\pi_{i,\ell}(t+\Delta)$ in Eq. (25) ties migration destinations to the LFO's predicted next fog, presented in Eq. (11), ensuring that migration decisions remain consistent with the mobility-load fusion policy rather than allowing arbitrary destination choices.

$\chi_{i,j}(t) \triangleq \mathbb{1}\big[d(i,j,t) \le R_j\big], \quad \text{with } d(i,j,t) \text{ from Eq. (12)},$ (22)

$\chi_{i,j}(t+\Delta) \triangleq \mathbb{1}\big[\|\hat{L}_i(t+\Delta) - L_j\| \le R_j\big], \quad \text{with } \hat{L}_i(t+\Delta) \text{ from Eq. (7)},$ (23)

$g_i(t+\Delta) \triangleq \mathbb{1}\big[c_i(t+\Delta) \ge c_{\min}\big], \quad \text{with } c_i(t+\Delta) \text{ from Eq. (8)},$ (24)

$\pi_{i,\ell}(t+\Delta) \triangleq \mathbb{1}\big[f_\ell = \widehat{\text{nextFog}}(i, t+\Delta)\big], \quad \text{with } \widehat{\text{nextFog}}(i, t+\Delta) \text{ from Eq. (11)},$ (25)

where $c_{\min} \in [0,1]$ is a confidence threshold. In this study, $c_{\min}$ is kept static across simulation scenarios to keep comparisons reproducible and to isolate the contribution of uncertainty gating.

Given a placement decision, the end-to-end latency of a task consists of communication delay and execution delay given in Eq. (19). When a migration is triggered, an additional migration overhead is incurred, which is explicitly added in the latency surrogate Tk(t) in Eq. (26).

$T_k(t) = \sum_{j \in \mathcal{V}} x_{k,j}(t)\Big(T_{k,\, i(k) \to j}^{\text{comm}}(t) + T_{k,j}^{\text{exec}}(t)\Big) + \sum_{f_\ell \neq f_j} m_{k, j \to \ell}(t)\, T_j^{\text{mig}}.$ (26)

Similarly, the energy surrogate Ek(t) in Eq. (27) accounts for both device-side transmission energy and fog-side execution energy (Eq. (20)).

$E_k(t) = \sum_{j \in \mathcal{V}} x_{k,j}(t)\Big(E_{k, i(k)}^{\text{tx}}(t) + E_{k,j}^{\text{exec}}(t)\Big).$ (27)

These surrogates provide a tractable way to compare candidate decisions while remaining aligned with the underlying physical and queueing models already defined earlier, thereby avoiding repeated definitions.

The primary QoS objective in mobile fog settings is task success, i.e., meeting deadlines despite mobility. This is captured by the failure indicator $z_k(t)$, which penalizes deadline misses and/or disconnections. At the same time, among feasible choices that meet deadlines, the LFO should prefer decisions that reduce user-perceived latency and avoid excessive energy expenditure. Therefore, the composite objective is adopted in Eq. (28), which trades off normalized latency $\tilde{T}_k(t)$ and normalized energy $\tilde{E}_k(t)$ (Eq. (29)) while heavily penalizing failures via the $w_3$ term. The normalization uses task-specific budgets ($L_k^{\max}$ and $E_{i(k)}^{\max}$) so that heterogeneous tasks remain comparable without requiring global min–max statistics.

$\min_{X(t),\, M(t),\, \{z_k(t)\}} J(t) = w_1 \sum_{k \in \mathcal{K}(t)} \tilde{T}_k(t) + w_2 \sum_{k \in \mathcal{K}(t)} \tilde{E}_k(t) + w_3 \sum_{k \in \mathcal{K}(t)} z_k(t),$ (28)

with normalization

$\tilde{T}_k(t) = \dfrac{T_k(t)}{L_k^{\max}}, \qquad \tilde{E}_k(t) = \dfrac{E_k(t)}{E_{i(k)}^{\max}}.$ (29)
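For a fixed set of decisions, evaluating the objective of Eqs. (28)-(29) is a simple weighted sum over tasks. In the sketch below the weights $(w_1, w_2, w_3) = (0.4, 0.3, 0.3)$ are illustrative assumptions only; the paper does not fix them at this point.

```python
def objective(t_k, e_k, z_k, l_max, e_max, w=(0.4, 0.3, 0.3)):
    """Evaluate J(t) from Eqs. (28)-(29) for given per-task surrogates.

    t_k, e_k, z_k : per-task latency surrogate T_k, energy surrogate E_k,
                    and binary failure indicator z_k
    l_max, e_max  : per-task deadline budget L_k^max and energy budget E^max
    w             : objective weights (w1, w2, w3); illustrative defaults
    """
    w1, w2, w3 = w
    total = 0.0
    for t, e, z, l_budget, e_budget in zip(t_k, e_k, z_k, l_max, e_max):
        total += w1 * (t / l_budget)   # normalised latency term
        total += w2 * (e / e_budget)   # normalised energy term
        total += w3 * z                # failure penalty
    return total
```

Because each term is normalised by its own budget, a task halfway to its deadline and halfway through its energy budget contributes the same to both terms regardless of its absolute scale.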

5  Implementation in MobFogSim

This section implements the proposed DT-enhanced orchestration and scheduling pipeline in MobFogSim [39], a simulator designed for fog computing with mobile users and service migration. MobFogSim extends iFogSim to support user mobility traces, wireless handoff events, and module migration across fog nodes, making it suitable for evaluating the proposed mobility-aware offloading and proactive migration logic.

5.1 Simulation Settings

A multi-tier fog environment is implemented, inspired by a smart-city layout. The simulated domain contains $|\mathcal{F}| = 5$ fog nodes and $|\mathcal{U}| = 20$ mobile users, along with a cloud data center that has high compute capacity but higher WAN latency. Each fog node $f_j$ is configured with a compute capacity specified in MIPS by MobFogSim; this corresponds to the processing capacity $C_j$ in our model up to a constant unit scaling, and task workloads configured in MI correspond to $W_k$ in cycles. Each fog node is also assigned a wireless access radius, which implements the coverage constraint used in the candidate-set definition and in the feasibility indicators, as also indicated in Eqs. (22) and (23). Device-to-fog links are configured with realistic bandwidth ranges for in-range connectivity, while fog-to-cloud links incur higher latency to represent WAN traversal. Details are given in the simulation settings in Table 4.


To drive user mobility, the Luxembourg SUMO Traffic (LuST) mobility dataset [40] is utilized. LuST is a representative outdoor smart-city mobility workload that provides realistic user routes and speed variations. In MobFogSim, each user’s location is updated at each simulation tick according to the trace, and the simulator generates handoff events when the user crosses fog coverage boundaries. However, evaluating additional mobility models and indoor traces is left for future work. Fig. 4 provides a visual reference for the real-world-derived LuST urban environment and the corresponding fog deployment topology considered in the experiments.


Figure 4: Visual representation of the evaluated environment. (a) Real-world-derived LuST urban road layout used for user mobility generation, with example trajectories/coverage regions annotated. (b) Labelled deployment topology used in MobFogSim, showing mobile users, fog nodes, wireless access points, and the cloud backend.

5.2 The Digital-Twin Control Plane in MobFogSim

MobFogSim does not contain any built-in “digital twin layer” module; therefore, it is implemented logically as a control module that emulates the LFO functionality in our architecture. Specifically, a TwinManager module is introduced that runs alongside the MobFogSim broker/controller and maintains the synchronized user-twin and fog-twin state structures as given in Eqs. (1) and (2). At each decision epoch t, TwinManager pulls the latest simulation state for all users and fog nodes in the domain, thereby producing the LFO-available mirrored states {Si(t)} and {Sjf(t)}. In this experiment, a single LFO domain is focused on; the GFO functionality is not explicitly simulated but is conceptually represented by the optional aggregation of domain summaries.

At each epoch t, TwinManager executes the same conceptual pipeline as Algorithm 1. First, it constructs the candidate set 𝒩i(t) for each user from coverage and identifies feasible fog nodes. Second, it runs the GRAIN mobility predictor that forms the user feature vector as per Eq. (1), applies EWMA smoothing as Eq. (3), and executes a GRU forward pass over the most recent h smoothed states as per Eq. (5) to obtain (i) the association tendency distribution over candidates given in Eq. (6) and (ii) the predicted future user location given in Eq. (7). Third, it computes the confidence score via normalized entropy given in Eq. (8). In parallel, it updates EWMA-based forecasts of fog-side load indicators using fog-twin states given in Eq. (9). Finally, it selects the next fog by mobility-load fusion given in Eqs. (10) and (11) and applies a feasibility check using the predicted location to avoid selecting a fog node that is unlikely to cover the user at the lookahead horizon. Since DT synchronization is emulated as a local state read in MobFogSim, the simulator does not explicitly capture mobile-device battery drain due to DT telemetry. The assignment matrix X(t) is realized by selecting the execution venue—fog node, cloud, or local device—for each new task upon arrival, i.e., setting the target fog device for the task’s module placement in MobFogSim. The migration tensor M(t) is realised by triggering module migration between fog nodes for ongoing services when the migration condition holds as given also in Eq. (15), approximating state transfer by the configured migration delay Tjmig. Further, task failures are detected from simulator logs when a task misses its deadline or is dropped due to loss of connectivity before completion. The GRU component is pre-trained offline using mobility traces and then used for inference during simulation runs. 
To avoid optimistic bias, the training data are separated from the evaluation runs by a time-based split of the traces, so the GRU is never evaluated on the segment it was trained on. During the simulation, the LFO performs inference only, once at each epoch.

MobFogSim provides a best-case DT deployment model in which the TwinManager reads the state locally from the simulator. As a result, synchronization delay is negligible in our reported experiments, and the DT update period equals the decision epoch, i.e., δ=1 s. In real deployments, non-zero telemetry and twin-update latency can reduce the effective lookahead of proactive migration. Proactive decisions are most beneficial when the DT update interval and communication delay are small relative to the user’s residence time in a fog coverage region.

5.3 Evaluation Metrics

The evaluation metrics are chosen to align directly with the objective components in Eq. (28) and to characterize the operational behavior of mobility handling. First, the average end-to-end delay is computed using the per-task latency surrogate in Eq. (26) and averaged across all tasks. This delay aggregates communication delay, load-dependent execution delay, and migration overhead. In addition to the mean, we report tail latency (e.g., the 95th percentile) to reflect reliability under congestion and mobility. Second, average energy is computed using the per-task energy surrogate in Eq. (27), aggregated over tasks and reported as energy per task. Third, the task success rate measures the fraction of tasks completed within the deadline without being dropped. This aligns with the failure indicator $z_k(t)$ used in Eq. (28) and is computed as $SR = 100\% \times \big(1 - \frac{1}{|\mathcal{K}|} \sum_{k \in \mathcal{K}} z_k\big)$.

Fourth, DT overhead quantifies the additional computation and communication costs introduced by the DT-enabled control loop. Therefore, (i) CPU utilization time attributable to EWMA updates, GRU inference, confidence computation, fog EWMA forecasting, and fusion-based decision making, and (ii) additional bytes transmitted for twin synchronization messages, are measured.

Finally, fifth, migration count is the total number of migrations triggered, and service migration rate is the fraction of tasks that undergo at least one migration. These metrics help identify whether a scheme is overly reactive, i.e., excessive migrations or insufficiently adaptive, i.e., too few migrations, and they contextualize delay and energy trade-offs.

6  Results and Discussion

The metrics in Section 5.3 are evaluated across scenarios with varying user speeds and task arrival rates to stress the system under different mobility and workload intensities. We compare three configurations: (i) TRAP [29], which performs multi-objective scheduling without explicit mobility handling; (ii) PROMO [17], which supports mobility-aware migration based on reactive/threshold triggers without DT-driven prediction and load-aware fusion; and (iii) the proposed DT-enhanced scheduler, which uses DT-synchronised states, GRAIN-based mobility prediction, confidence estimation, fog-load EWMA forecasting, and mobility-load fusion for next-fog selection. The reported values are averaged over 10 independent runs of a 30-minute simulation scenario with different random seeds.

Table 5 summarizes the objective-aligned metrics used in Eq. (28), namely mean delay through $T_k(t)$ in Eq. (26), energy per task through $E_k(t)$ in Eq. (27), and service reliability captured by the failure indicator $z_k(t)$, reported here as success rate $= 100\% - {}$failure rate. DT overhead is reported alongside these objective terms to quantify the additional compute cost of the proposed DT control loop. Further, we also include statistical analysis to quantify run-to-run variability. Therefore, aggregate metrics are reported as mean ± standard deviation over 10 independent runs, and the corresponding 95% confidence interval for $n = 10$ is computed as $\bar{x} \pm t_{0.975,9}\, s/\sqrt{10}$, where $t_{0.975,9} = 2.262$.
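The confidence-interval computation above is standard; a minimal sketch follows, with the Student-t quantile for 9 degrees of freedom hard-coded to the value used in the paper (2.262) rather than looked up from a statistics library.

```python
import math

def ci95(samples, t_crit=2.262):
    """95% confidence interval: mean +/- t_{0.975,9} * s / sqrt(n).

    t_crit defaults to the Student-t quantile for n - 1 = 9 degrees of
    freedom; s is the sample standard deviation (ddof = 1).
    Returns (lower, upper).
    """
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # unbiased variance
    half = t_crit * math.sqrt(var) / math.sqrt(n)          # half-width
    return mean - half, mean + half
```

For other run counts, `t_crit` would need the quantile for the appropriate degrees of freedom, e.g., via `scipy.stats.t.ppf(0.975, n - 1)`.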


Table 6 summarizes the variability of the observed improvements across low-, medium-, and high-load regimes. The proposed method shows consistently positive gains across all tested loads, with especially stable improvements for prediction-quality metrics such as MAE, RMSE, and MAPE, while metrics such as DMR exhibit larger variability due to their stronger sensitivity to congestion severity.


Relative to the static baseline TRAP, PROMO reduces the average task delay from 250.3 to 158.7 ms, corresponding to a 36.6% delay reduction. PROMO also improves reliability from 80.4% to 95.0% success rate; equivalently, the failure rate drops from 19.6% to 5.0%, i.e., a 74.5% reduction in failures, which directly reduces the $\sum_k z_k(t)$ penalty in Eq. (28). Energy per task decreases modestly from 1.50 to 1.42 J, a 5.3% reduction, consistent with fewer mobility-induced disruptions and reduced time spent under poor connectivity.

The proposed DT-enhanced scheduler yields further gains over PROMO by coupling GRAIN-based mobility prediction with fog-load forecasting and mobility-load fusion, explained in Eqs. (10) and (11). Specifically, the mean delay decreases from 158.7 to 139.6 ms, i.e., an additional 12.0% reduction vs. PROMO and a 44.2% reduction vs. TRAP. Energy per task decreases from 1.42 to 1.32 J, i.e., a 7.0% reduction vs. PROMO and a 12.0% reduction vs. TRAP. Reliability improves from 95.0% to 99.1% success rate, a +4.1 percentage-point increase. In terms of failures, the failure rate drops from 5.0% to 0.9%, i.e., an 82.0% reduction in failures vs. PROMO and a 95.4% reduction vs. TRAP (from 19.6% to 0.9%). These improvements indicate that the DT-enhanced method not only migrates proactively but also avoids migrating users toward nodes predicted to become congested, thereby reducing queueing delay and deadline misses. These QoS improvements incur a modest DT compute overhead of 5.1% CPU. Importantly, this overhead reflects the added control-plane intelligence central to the proposed theme: twin synchronization, EWMA updates, GRU inference, confidence computation, fog EWMA forecasting, and fusion scoring. Given the observed 12.0% mean-delay reduction and 82.0% failure reduction relative to PROMO, the overhead represents a favorable trade-off in the tested setting.

The gains in Table 5 are not due to a single factor, but to the joint effect of prediction, load awareness, and uncertainty gating. TRAP is primarily reactive and does not explicitly anticipate user movement, so tasks may remain attached to a fog node even as the user moves away, increasing communication delays and the risk of deadline misses. PROMO improves on TRAP by anticipating mobility, but its decisions are driven primarily by movement tendencies and do not explicitly account for whether the target fog will remain lightly loaded over the decision horizon. In contrast, the proposed DT-enhanced method combines GRAIN-based association prediction with EWMA-based fog-load forecasting and the fusion score in Eqs. (10) and (11). This allows the controller to migrate not simply to the next likely fog, but to the next reliable and less congested fog.

The plots in Figs. 5–10 provide evidence beyond the mean values in Table 5. Collectively, they show that the proposed DT-enhanced method is productive in the sense that it (i) reduces not only the average delay but also the tail delay, (ii) improves reliability under increasing load, (iii) improves the energy–delay operating point, and (iv) does so with bounded DT overhead and fewer migrations than a reactive migration baseline. In other words, the gains are not obtained by “over-migrating” or shifting cost from one metric to another; they come from earlier, more selective, and load-aware decisions.

Figure 5: Task delay behavior under TRAP, PROMO, and the proposed DT-enhanced model.

Figure 6: Success rate vs. task arrival rate.

Figure 7: Joint behavior of average task delay and energy per task.

Figure 8: Digital-twin overhead characteristics.

Figure 9: Illustrative handover case where a user moves from fog node f1 towards f2.

Figure 10: Migration behavior under TRAP, PROMO, and the proposed DT-enhanced scheduler.

Fig. 5a shows the empirical CDF of task delay. A curve that is further left indicates that a larger fraction of tasks finish within a smaller delay budget. The proposed DT-enhanced curve is consistently left-shifted relative to PROMO, and both are substantially left of TRAP, implying that DT-enhanced scheduling improves delay for most tasks, not only on average. Importantly, the right tail under TRAP is much heavier: the CDF approaches 1 only near 900–1000 ms, indicating rare but severe latency spikes. In contrast, PROMO saturates near 350 ms, and the proposed DT-enhanced method saturates near 280–300 ms, showing that the DT-enhanced policy significantly suppresses worst-case delays. Fig. 5b reinforces this interpretation: TRAP exhibits a higher median and a much wider spread with many high outliers, whereas PROMO reduces both the median and the interquartile range (IQR). The proposed DT-enhanced method has the lowest median and a tighter IQR, indicating more stable QoS in addition to lower mean delay. This is aligned with the objective term based on $T_k(t)$ in Eq. (26), because reducing tail latency also reduces the probability of deadline violations.

Fig. 6 evaluates the success rate as the task arrival rate increases. The proposed DT-enhanced method consistently maintains the highest success rate, and the gap widens at higher load, which is where congestion and mobility jointly increase deadline misses. Concretely, at the High arrival rate, the success rate improves from 70% in TRAP and 89% in PROMO to 96% in DT-enhanced, i.e., a +26 percentage-point gain over TRAP and a +7 percentage-point gain over PROMO. At the Medium arrival rate, the proposed method achieves 99% vs. 95% for PROMO and 82% for TRAP. These results indicate that the DT-enhanced strategy improves the failure-indicator term ∑k zk(t) in Eq. (28), and that the improvement is robust to increased load rather than limited to a single operating point.

Fig. 7 summarizes whether delay improvements are achieved at the cost of higher energy. The proposed DT-enhanced method moves toward the bottom-left region, indicating a genuine improvement in the multi-objective sense, as in Eq. (28). Compared to PROMO, the proposed method reduces delay from 159 to 140 ms while also reducing energy per task from 1.42 to 1.32 J. Compared to TRAP, it reduces both delay and energy substantially. This confirms that the DT-enhanced approach does not merely trade energy for latency; it improves both, consistent with the intended mobility–load fusion behavior.

Fig. 8a quantifies the additional cost of the DT control loop at nominal load: 5.1% CPU overhead and 1.3% network overhead. Fig. 8b shows how this overhead scales with the number of users, where CPU overhead increases from 3.5% at 50 users to 6.8% at 200 users, while network overhead increases from 1.0% to 1.7%. Two points are important here. First, the absolute overhead remains modest, below 7% CPU and below 2% network in the tested range. Second, the growth is gradual, indicating that the control-plane cost scales reasonably with user population. Therefore, the DT layer appears practically viable because it introduces bounded overhead while enabling measurable gains in QoS and reliability.

The handover case study in Fig. 9 illustrates the causal mechanism behind the aggregate improvements. Under TRAP, the task remains anchored to f1 even as the user moves toward f2, resulting in a long completion time of 500 ms. PROMO mitigates this by migrating near the handoff boundary, reducing the completion time to 220 ms, but the migration incurs a visible overhead component (the hatched segment). The proposed DT-enhanced method completes in 180 ms by anticipating the short dwell time near f1 and proactively transitioning the service earlier, thereby avoiding or minimizing live-migration overhead. Relative to PROMO, this is an approximately 18% reduction in the case-study delay (from 220 ms to 180 ms); relative to TRAP, it is an approximately 64% reduction (from 500 ms to 180 ms).
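The case-study reductions quoted above follow from simple relative-difference arithmetic, sketched below with the Fig. 9 completion times:

```python
def pct_reduction(before_ms, after_ms):
    """Relative delay reduction, as a percentage of the baseline value."""
    return 100.0 * (before_ms - after_ms) / before_ms

# Case-study completion times from Fig. 9 (ms)
trap, promo, dt = 500, 220, 180
print(round(pct_reduction(promo, dt), 1))  # -> 18.2 (vs. PROMO)
print(round(pct_reduction(trap, dt), 1))   # -> 64.0 (vs. TRAP)
```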

Fig. 10a shows the average number of migrations per user: PROMO performs 3.2 migrations/user, while the proposed DT-enhanced method performs 2.7 migrations/user, a 15.6% reduction; TRAP performs none because it does not migrate. Fig. 10b shows the service migration rate: PROMO migrates 40% of services/tasks, while the proposed DT-enhanced method migrates 34%, a 15% reduction. These results are significant because they rule out a common “false win” in mobility studies: achieving lower delay simply by migrating more aggressively. The proposed method achieves lower delay and a higher success rate with fewer migrations than PROMO, indicating more selective and better-timed mobility handling.

6.1 Ablation Study and Component Contribution Analysis

To quantify the contribution of individual components in the proposed framework, we conducted an ablation study by selectively removing key modules. Unlike TRAP and PROMO, which are full external baseline schedulers, the rows in Table 7 are internal ablation variants of the proposed method obtained by disabling one module at a time.

images

The results indicate that the digital twin and confidence-gated decision mechanism contribute most significantly to reliability improvements, while GRU-based mobility prediction primarily reduces latency. This confirms that the proposed framework’s performance gains arise from the synergistic interaction of multiple components rather than a single algorithmic improvement.

6.2 Sensitivity of EWMA Hyperparameters

The EWMA factors αx (user-feature smoothing, Eq. (3)) and αz (fog-metric forecasting, Eq. (9)) control the responsiveness–robustness trade-off in the GRAIN loop. Since proactive handoff or migration depends on both the association tendency pi(t+Δ) and the predicted fog metrics (ρ̂j, Q̂j), α directly impacts (i) the trigger time of proactive actions, (ii) the stability of the selected next fog node, and (iii) migration churn.

With decision epoch δ, the EWMA weights decay as (1 − α)^k, giving an effective memory of Seff ≈ 2/α − 1 samples and a time constant τ = −δ/ln(1 − α). A small α suppresses noise but increases decision lag, whereas a large α reacts quickly but can propagate signal noise and queue jitter into the control loop. Fig. 11 illustrates this effect via the EWMA step response.

Figure 11: EWMA step response for different smoothing factors α. The dashed line marks the 1 − 1/e ≈ 0.632 level used to visualize the time constant.
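Assuming per-epoch updates at interval δ, the EWMA recurrence, effective memory, and time constant discussed above can be sketched as follows; α = 0.3 is an illustrative value, not the tuned setting from the experiments:

```python
import math

def ewma_update(prev, sample, alpha):
    """One EWMA step: new = alpha * sample + (1 - alpha) * prev."""
    return alpha * sample + (1.0 - alpha) * prev

def effective_memory(alpha):
    """Approximate number of samples the filter 'remembers': 2/alpha - 1."""
    return 2.0 / alpha - 1.0

def time_constant(delta, alpha):
    """Time for the step response to reach the 1 - 1/e (~0.632) level."""
    return -delta / math.log(1.0 - alpha)

def step_response(alpha, steps):
    """Response to a unit step: after k updates, y = 1 - (1 - alpha)^k."""
    y, out = 0.0, []
    for _ in range(steps):
        y = ewma_update(y, 1.0, alpha)
        out.append(y)
    return out

resp = step_response(alpha=0.3, steps=20)
print(effective_memory(0.3), time_constant(1.0, 0.3))
```

Plotting `step_response` for several α values reproduces the qualitative behaviour of Fig. 11: larger α crosses the 0.632 line in fewer epochs but tracks input noise more closely.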

6.3 Worst-Case Behavior and Complexity Analysis

Proactive mobility-aware schemes such as PROMO and the proposed framework achieve gains only when near-future mobility predictions are reliable. In a worst-case setting where the prediction input becomes intentionally incorrect or highly volatile, proactive control can trigger unnecessary migrations, increasing churn and overhead.

In our proposed framework, unreliable predictions typically produce a more spread-out next-fog probability distribution, which increases the normalized entropy and lowers the confidence score. Therefore, the entropy-based confidence gate suppresses proactive handoff preparation and proactive migrations when predictions are unreliable. Hence, in the worst case, the system degrades conservatively toward baseline reactive behaviour rather than amplifying oscillations. A full adversarial evaluation is beyond the scope of the present simulation and is left for future work.
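A minimal sketch of such an entropy-based confidence gate is shown below; the threshold of 0.6 and the helper names are illustrative assumptions, not the framework's tuned parameters:

```python
import math

def normalized_entropy(probs):
    """Entropy of the next-fog probability distribution, normalized to [0, 1]."""
    n = len(probs)
    if n <= 1:
        return 0.0
    h = -sum(p * math.log(p) for p in probs if p > 0.0)
    return h / math.log(n)  # divide by max entropy log(n)

def confidence_gate(probs, threshold=0.6):
    """Allow proactive migration only when prediction confidence is high.

    Confidence is taken here as 1 - normalized entropy; a spread-out
    distribution (unreliable prediction) has high entropy, low confidence,
    and is therefore gated back to reactive behaviour.
    """
    confidence = 1.0 - normalized_entropy(probs)
    return confidence >= threshold

print(confidence_gate([0.9, 0.05, 0.05]))  # peaked -> proactive allowed
print(confidence_gate([0.4, 0.3, 0.3]))    # spread -> fall back to reactive
```

With a peaked distribution the gate opens, and with a near-uniform one it closes, matching the conservative degradation described above.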

Further, computational and communication complexity is analyzed. Let |𝒩i(t)| be the number of nearby candidate fog nodes for user i at epoch t, and let K be the number of tasks considered for migration. At each epoch, the controller performs: (i) one forward pass of the GRAIN predictor, (ii) entropy computation over |𝒩i(t)| candidates, and (iii) migration-rule evaluation for each task across these candidates. Thus, the online time complexity per epoch is

𝒪(CGRU + |𝒩i(t)| + K|𝒩i(t)|) = 𝒪(CGRU + (K + 1)|𝒩i(t)|),  (30)

where CGRU is the fixed cost of a GRU forward pass for a configured hidden size and sequence length. In a real-world scenario, |𝒩i(t)| is small because it is limited to coverage neighbours, keeping the decision overhead lightweight. Furthermore, the communication overhead is dominated by periodic telemetry updates. With per-user update interval δ, the uplink reporting cost scales as 𝒪(U/δ) for U active users.
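The per-epoch cost in Eq. (30) can be made concrete with a toy accounting function; the constants below are hypothetical placeholders rather than measured costs:

```python
def epoch_cost(c_gru, n_candidates, k_tasks):
    """Abstract per-epoch operation count following Eq. (30):
    one GRU forward pass + entropy over the candidate fogs
    + one migration-rule check per (task, candidate) pair."""
    return c_gru + n_candidates + k_tasks * n_candidates

# Cost grows linearly in both K and |N_i(t)|; c_gru is a fixed constant
print(epoch_cost(c_gru=1000, n_candidates=5, k_tasks=10))  # -> 1055
```

Because |𝒩i(t)| is bounded by the coverage neighbourhood, the K·|𝒩i(t)| term dominates only for very large task batches, which is consistent with the lightweight-decision claim above.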

6.4 Scalability Analysis

To evaluate the scalability of the proposed framework, the simulation was extended to large-scale scenarios with varying numbers of mobile users and fog nodes. Scalability is a basic requirement of fog and edge computing systems, since a considerable number of heterogeneous devices, exhibiting a wide variety of mobility patterns, is expected to participate in such systems [10,11]. As expected, all evaluated approaches exhibit higher latency at larger scales due to increased competition for fog resources and communication overhead, as reported in previous studies on large-scale edge computing systems [41]. Nevertheless, the proposed framework continues to achieve a lower average task delay than TRAP and PROMO across all simulation scenarios, showing that the integration of mobility prediction and load-aware scheduling sustains efficient resource allocation as workload and mobility levels increase.

In addition, the proposed framework's task success rate remains higher than that of the other approaches, particularly in large-scale scenarios. This observation is consistent with recent studies reporting that predictive, mobility-aware orchestration can significantly improve task reliability in edge and fog computing systems [41,42]. In terms of overhead, the digital-twin control loop incurs additional computational cost that grows with system size, but the growth is gradual, indicating that digital twin-based orchestration can be implemented with reasonable control-plane overhead, as reported in recent studies on digital twin-based fog computing systems [12,13]. Overall, the proposed framework maintains superior quality of service and feasibility as the system scale increases, indicating its applicability to large-scale fog computing systems, as shown in Table 8.

images

The reported results should be interpreted in light of several experimental assumptions. First, the study is simulation-based in MobFogSim and therefore inherits the simulator’s abstraction level for wireless links, processing delays, and migration costs, rather than a real hardware deployment. Second, user mobility is driven only by the Luxembourg SUMO Traffic (LuST) dataset, which is representative of outdoor urban vehicular movement but does not cover indoor, pedestrian, or dense industrial IoT mobility regimes. Third, the DT control plane is emulated as a co-located TwinManager that reads simulator state locally, corresponding to a best-case edge deployment; thus, mobile-device battery drain and large non-zero synchronization delays are not explicitly measured in the reported runs. Finally, the experiments focus on a single LFO domain and do not evaluate inter-domain coordination. These limitations may affect the generalizability of the absolute performance values, but the relative comparison among TRAP, PROMO, and the proposed method remains meaningful because all schemes are evaluated under the same controlled conditions.

7  Conclusion

The present paper proposes a digital twin–based, uncertainty-aware framework for proactive fog scheduling under dynamic user mobility and fog congestion. By integrating confidence-gated mobility prediction with load-aware decision-making, the framework enables anticipatory handoff and task migration while avoiding unnecessary control actions. Empirical results demonstrate that the proposed approach consistently outperforms baseline methods, achieving lower latency and energy consumption, higher service reliability, and only modest digital-twin overhead. These findings confirm that uncertainty-aware digital twin orchestration is not merely an incremental enhancement but a critical mechanism for ensuring robust service continuity in mobile fog environments. Overall, this work establishes a principled foundation for next-generation proactive edge and fog control, paving the way for scalable, intelligent, and reliability-aware orchestration in highly dynamic edge systems.

Acknowledgement: The authors extend their appreciation to the Deanship of Scientific Research at Northern Border University, Arar, Saudi Arabia, for funding this research work.

Funding Statement: This research was funded by the Deanship of Scientific Research at Northern Border University, Arar, Saudi Arabia, under grant number NBU-FFR-2026-451-5.

Author Contributions: Navjeet Kaur conceptualized the study, performed the investigation, and wrote the original draft. Ayush Mittal validated and implemented the proposed work. Saad Alahmari provided project supervision, methodological guidance, and critical manuscript revision. All authors reviewed and approved the final version of the manuscript.

Availability of Data and Materials: The data and materials used in this study are available from the corresponding author upon request.

Ethics Approval: This study did not involve human participants, human data, or animals. Therefore, ethics approval was not required.

Conflicts of Interest: The authors declare no conflicts of interest.

Appendix A Notation Table

images

References

1. Joseph AG, Gokhale MM, Mani JMJ, Veni T. Mobility aware computation offloading in fog devices using virtual machine migration. In: 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT). Piscataway, NJ, USA: IEEE; 2023. p. 1–6. [Google Scholar]

2. Araújo MC, Bittencourt LF. CMFogV: proactive content migration for multi-level fog computing. Pervasive Mob Comput. 2024;102:101933. [Google Scholar]

3. Li Z, Mei X, Sun Z, Xu J, Zhang J, Zhang D, et al. A reference framework for the digital twin smart factory based on cloud-fog-edge computing collaboration. J Intell Manuf. 2025;36(5):3625–45. doi:10.1007/s10845-024-02424-0. [Google Scholar] [CrossRef]

4. Hayawi K, Sajid J, Malik AW, Mathew SS. Digital twin assisted task offloading for workload management at fog nodes. IEEE Internet Things J. 2025;12(13):23061–72. doi:10.1109/jiot.2025.3550832. [Google Scholar] [CrossRef]

5. Liang Y, Li G, Zhang G, Guo J, Liu Q, Zheng J, et al. Latency reduction in immersive systems through request scheduling with digital twin networks in collaborative edge computing. ACM Trans Sens Netw. 2024;8:12679. doi:10.1145/3701562. [Google Scholar] [CrossRef]

6. El-Khatib RF, Elsayed SA, Zorba N, Hassanein HS. Proactive task allocation in extreme edge computing for digital twin services. IEEE Internet Things J. 2025;12(10):14051–66. doi:10.1109/jiot.2024.3524868. [Google Scholar] [CrossRef]

7. Zhao Z, Wang Y, Xie X. Advancing traffic resource scheduling with cloud-edge collaboration: a virtualized digital twin perspective. IEEE Internet Things J. 2025;12(19):39625–39. doi:10.1109/jiot.2025.3588153. [Google Scholar] [CrossRef]

8. Jeremiah SR, Yang LT, Park JH. Digital twin-assisted resource allocation framework based on edge collaboration for vehicular edge computing. Future Gener Comput Syst. 2024;150(3):243–54. doi:10.1016/j.future.2023.09.001. [Google Scholar] [CrossRef]

9. Wang Y, Su Z, Guo S, Dai M, Luan TH, Liu Y. A survey on digital twins: architecture, enabling technologies, security and privacy, and future prospects. IEEE Internet Things J. 2023;10(17):14965–87. [Google Scholar]

10. Xie B, Cui H. Deep reinforcement learning-based dynamical task offloading for mobile edge computing. J Supercomput. 2025;81(1):35. doi:10.1007/s11227-024-06603-x. [Google Scholar] [CrossRef]

11. Sheng S, Chen P, Chen Z, Wu L, Yao Y. Deep reinforcement learning-based task scheduling in IoT edge computing. Sensors. 2021;21(5):1666. doi:10.3390/s21051666. [Google Scholar] [PubMed] [CrossRef]

12. Chen X, Cao J, Liang Z, Sahni Y, Zhang M. Digital twin-assisted reinforcement learning for resource-aware microservice offloading in edge computing. In: 2023 IEEE 20th International Conference on Mobile Ad Hoc and Smart Systems (MASS). Piscataway, NJ, USA: IEEE; 2023. p. 28–36. [Google Scholar]

13. Zhou X, Peng Y, Lan T, Zhang Z, Tang B, Guan X. Digital twin empowered task offloading for mobile edge computing in 6G internet of vehicles. IEEE Internet Things J. 2025;12(15):29189–202. doi:10.1109/jiot.2025.3575466. [Google Scholar] [CrossRef]

14. Hu Z, Tu J, Li B. SPEAR: optimized dependency-aware task scheduling with deep reinforcement learning. In: Proceedings of the 2019 39th IEEE International Conference on Distributed Computing Systems. Piscataway, NJ, USA: IEEE; 2019. p. 2037–46. [Google Scholar]

15. Zhou C, Wu W, He H, Yang P, Lyu F, Zhang N, et al. Deep reinforcement learning for delay-oriented IoT task scheduling in SAGIN. IEEE Trans Wireless Commun. 2021;20(2):911–25. doi:10.1109/twc.2020.3029143. [Google Scholar] [CrossRef]

16. Taleb T, Ksentini A, Frangoudis PA. Follow-Me cloud: when cloud services follow mobile users. IEEE Trans Cloud Comput. 2019;7(2):369–82. [Google Scholar]

17. Kaur N, Kumar A, Kumar R. PROMO: PROactive mObility-support model for task scheduling in fog computing. Int J Comput Appl. 2022;44(11):1092–101. [Google Scholar]

18. Bozkaya E. Digital twin-assisted and mobility-aware service migration in mobile edge computing. Comput Netw. 2023;231(7):109798. doi:10.1016/j.comnet.2023.109798. [Google Scholar] [CrossRef]

19. Sun W, Zhang H, Wang R, Zhang Y, Shen X, Dai H. Reducing offloading latency for digital twin edge networks in 6G. IEEE Trans Veh Technol. 2020;69(10):12240–51. doi:10.1109/tvt.2020.3018817. [Google Scholar] [CrossRef]

20. Wang X, Ma L, Li H, Yin Z, Luan TH, Cheng N. Digital twin-assisted efficient reinforcement learning for edge task scheduling. arXiv:2208.01781. 2022. [Google Scholar]

21. Ogunseyi TB, Thiyagarajan G. An explainable LSTM-based intrusion detection system optimized by firefly algorithm for IoT networks. Sensors. 2025;25(7):2288. doi:10.3390/s25072288. [Google Scholar] [PubMed] [CrossRef]

22. Lazzarini R, Tianfield H, Charissis V. Federated learning for IoT intrusion detection. AI. 2023;4(3):509–30. doi:10.3390/ai4030028. [Google Scholar] [CrossRef]

23. Mao Y, Zhang J, Letaief KB. Joint task offloading scheduling and transmit power allocation for mobile-edge computing systems. In: Proceedings of the 2017 IEEE Wireless Communications and Networking Conference. Piscataway, NJ, USA: IEEE; 2017. p. 1–6. [Google Scholar]

24. Luo J, Song Q, Guo F, Wu H, Som HM, Alahmari S, et al. Joint deep reinforcement learning strategy in MEC for smart internet of vehicles edge computing networks. Sustain Comput Inform Syst. 2025;46(4):101121. doi:10.1016/j.suscom.2025.101121. [Google Scholar] [CrossRef]

25. Huang H, Ye Q, Zhou Y. Deadline-aware task offloading with partially-observable deep reinforcement learning for MEC. IEEE Trans Netw Sci Eng. 2021;9(6):3870–85. doi:10.1109/tnse.2021.3115054. [Google Scholar] [CrossRef]

26. Zhu C, Pastor G, Xiao Y, Liu C, Fu S. Fog following me: latency and quality balanced task allocation in vehicular fog computing. In: Proceedings of the 2018 15th Annual IEEE International Conference on Sensing, Communication and Networking. Piscataway, NJ, USA: IEEE; 2018. p. 1–9. [Google Scholar]

27. Maleki EF, Mashayekhy L. Mobility-aware computation offloading in edge computing using prediction. In: Proceedings of the 2020 IEEE 4th International Conference on Fog and Edge Computing (ICFEC). Piscataway, NJ, USA: IEEE; 2020. p. 69–74. [Google Scholar]

28. Ngo MV, Luo TK, Huynh HT, Nguyen HT, Dutkiewicz E. Coordinated container migration and base station handover in mobile edge computing. In: Proceedings of the 2020 IEEE Global Communications Conference. Piscataway, NJ, USA: IEEE; 2020. p. 1–6. [Google Scholar]

29. Kaur N, Kumar A, Kumar R. TRAP: task-resource adaptive pairing for efficient scheduling in fog computing. Clust Comput. 2022;25(6):4257–73. doi:10.1007/s10586-022-03641-z. [Google Scholar] [CrossRef]

30. Liu W, Li C, Zheng A, Zheng Z, Zhang Z, Xiao Y. Fog computing resource-scheduling strategy in IoT based on artificial bee colony algorithm. Electronics. 2023;12(7):1511. doi:10.3390/electronics12071511. [Google Scholar] [CrossRef]

31. Tao F, Zhang H, Liu A, Nee AYC. Digital twin in industry: state-of-the-art. IEEE Trans Ind Inf. 2019;15(4):2405–15. doi:10.1109/tii.2018.2873186. [Google Scholar] [CrossRef]

32. Liu M, Fang S, Dong H, Xu C. Review of digital twin about concepts, technologies, and industrial applications. J Manuf Syst. 2021;58(Pt B):346–61. doi:10.1016/j.jmsy.2020.06.017. [Google Scholar] [CrossRef]

33. Alourani A, Alam M, Ali A, Khan IR, Samal CK. Hybrid AI-IoT framework with digital twin integration for predictive urban infrastructure management in smart cities. Comput Mater Contin. 2025;86(1):1–32. doi:10.32604/cmc.2025.070161. [Google Scholar] [CrossRef]

34. Abdallah W, Alghamdi M. Digital twin-enabled AI for sustainable traffic management: real-time urban mobility optimization in smart cities. PeerJ Comput Sci. 2026;12(7):e3574. doi:10.7717/peerj-cs.3574. [Google Scholar] [CrossRef]

35. Zhang K, Cao J, Zhang Y, Shen S, Wu D. Adaptive digital twin and multiagent deep reinforcement learning for vehicular edge computing and networks. IEEE Trans Ind Inf. 2022;18(2):1405–13. doi:10.1109/tii.2021.3088407. [Google Scholar] [CrossRef]

36. Arsalan A, Umer T, Rehman RA, Bilal M, Mumtaz S. Digital twin-driven federated deep reinforcement learning for mobility-aware UAV-IoT coordination in smart agriculture. SSRN. 2025. doi:10.2139/ssrn.5986035. [Google Scholar] [CrossRef]

37. Kesavan TV, Venkatesan R, Wong WK, Ng PK. Reinforcement learning based secure edge enabled multi-task scheduling model for internet of everything applications. Sci Rep. 2025;15(1):6254. doi:10.1038/s41598-025-89726-2. [Google Scholar] [CrossRef]

38. Cheng N, Lyu F, Quan W, Foh CH, Zhang H, Jia W, et al. Space/aerial-assisted computing offloading for IoT applications: a learning-based approach. IEEE J Sel Areas Commun. 2019;37(5):1117–29. doi:10.1109/jsac.2019.2906789. [Google Scholar] [CrossRef]

39. Puliafito C, Gonçalves DM, Lopes MM, Martins LL, Madeira E, Mingozzi E, et al. MobFogSim: simulation of mobility and migration for fog computing. Simul Model Pract Theory. 2020;101:102062. [Google Scholar]

40. Codecà L. LuSTScenario: Luxembourg SUMO traffic (LuST) scenario. 2016. GitHub Repository, Version 2.0 [cited 2026 Jan 23]. Available from: https://github.com/lcodeca/LuSTScenario. [Google Scholar]

41. Chen X, Jiao L, Li W, Fu X. Efficient multi-user computation offloading for mobile-edge cloud computing. IEEE/ACM Trans Netw. 2015;24(5):2795–808. doi:10.1109/tnet.2015.2487344. [Google Scholar] [CrossRef]

42. Oueis J, Strinati EC, Barbarossa S. The fog balancing: load distribution for small cell cloud computing. In: 2015 IEEE 81st Vehicular Technology Conference (VTC Spring). Piscataway, NJ, USA: IEEE; 2015. p. 1–6. [Google Scholar]




Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.