TY - EJOUR
AU - Wang, Chaobin
AU - Tang, Xianghong
AU - Lu, Jianguang
AU - Yang, Jing
AU - Yuan, Panliang
TI - A Deep Reinforcement Learning-Based Pre-Allocation Mechanism for Efficient Task Offloading in Mobile Edge Computing
T2 - Computers, Materials & Continua
PY - 
VL - 
IS - 
SN - 1546-2226
AB - Mobile Edge Computing (MEC) facilitates the rapid response and energy-efficient execution of tasks on mobile devices. However, determining whether and where to offload tasks remains a significant challenge due to the dynamic nature of workloads in MEC environments. To address this issue, this paper proposes PreAlloc-A2C, a deep reinforcement learning framework based on the actor-critic architecture that computes allocation scores from both task features (task size, required completion time, and waiting time) and server features (queue length and historical workload). This design enables fully distributed task offloading decisions without centralized coordination. Additionally, a Long Short-Term Memory (LSTM) network is integrated to forecast upcoming server loads, thereby supporting adaptive scheduling. A tailored reward function is also designed to jointly optimize three key performance metrics: task delay, device energy consumption, and task drop rate. Extensive experiments are conducted to evaluate PreAlloc-A2C against five baseline algorithms: Particle Swarm Optimization (PSO), Advantage Actor-Critic (A2C), Deep Q-Network (DQN), Double Deep Q-Network (DDQN), and Dueling Deep Q-Network (Dueling DQN). The results show that PreAlloc-A2C outperforms all baselines, achieving lower latency, reduced energy consumption, and a lower task drop rate.
KW - Mobile edge computing
KW - Task offloading
KW - Deep reinforcement learning
DO - 10.32604/cmc.2026.078998
ER - 