Intelligent Management of Resources for Smart Edge Computing in 5G Heterogeneous Networks Using Blockchain and Deep Learning
1 Department of Computer Science and Artificial Intelligence, College of Computing and Information Technology, University of Bisha, Bisha, P.O. Box 551, Saudi Arabia
2 College of Computing & Information Sciences, University of Science and Technology, Ibri, 516, Oman
* Corresponding Author: Mohammad Tabrez Quasim. Email:
Computers, Materials & Continua 2025, 84(1), 1169-1187. https://doi.org/10.32604/cmc.2025.062989
Received 31 December 2024; Accepted 28 April 2025; Issue published 09 June 2025
Abstract
Smart edge computing (SEC) is a novel computing paradigm that moves cloud-based applications to the edge of the network, supporting computation-intensive services such as face detection and natural language processing. As a core feature of mobile edge computing, SEC improves user experience and device performance by offloading local tasks to edge servers. In this framework, blockchain technology ensures secure and trustworthy communication between edge devices and servers, protecting against potential security threats, while deep learning algorithms analyze resource availability and dynamically optimize computation offloading decisions. Resource-intensive IoT applications can benefit from SEC's broader coverage. However, because network access patterns change constantly and network devices have heterogeneous resources, establishing consistent, dependable, and real-time communication between edge devices and edge servers is difficult, particularly in 5G Heterogeneous Network (HN) settings. This paper therefore proposes an Intelligent Management of Resources for Smart Edge Computing (IMRSEC) framework that integrates blockchain, edge computing, and Artificial Intelligence (AI) into 5G HNs. Within this framework, a dual-schedule deep reinforcement learning (DS-DRL) technique has been developed, consisting of a rapid-schedule learning process and a slow-schedule learning process. The primary objective is to minimize overall offloading latency and system resource usage by jointly optimizing computation offloading, resource allocation, and application caching. Simulation results demonstrate that the DS-DRL approach reduces task execution time by 32%, validating the method's effectiveness within the IMRSEC framework.
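
Since the abstract does not spell out the DS-DRL algorithm itself, the following is a minimal, hypothetical Python sketch of the dual-timescale idea it describes: a fast-schedule learner updates per-task offloading decisions on every arrival, while a slow-schedule learner periodically refreshes caching preferences from accumulated statistics. The environment, latency model, and all names and constants (task_latency, SLOW_PERIOD, cache_score, etc.) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical toy environment (not from the paper): N edge servers; each
# arriving task has a size, and the agent decides to run it locally
# (action 0) or offload it to server i (action i).
rng = np.random.default_rng(0)

N_SERVERS = 3
N_ACTIONS = N_SERVERS + 1          # 0 = local execution, 1..N = offload target
STATE_BINS = 4                     # coarse system-load level per decision

def task_latency(action, load, size):
    """Toy latency model: local CPU is slow; offloading adds transmission
    delay plus queueing that grows with the chosen server's load."""
    if action == 0:
        return size / 1.0                              # local compute rate
    return size / 4.0 + 0.2 + 0.5 * load[action - 1]   # edge rate + tx + queue

# Fast-schedule learner: tabular Q-values over (load level, action),
# updated after every single task.
q = np.zeros((STATE_BINS, N_ACTIONS))
alpha_fast, eps = 0.1, 0.1

# Slow-schedule learner: a caching preference per server, refreshed only
# every SLOW_PERIOD tasks from accumulated offloading statistics.
cache_score = np.zeros(N_SERVERS)
offload_counts = np.zeros(N_SERVERS)
SLOW_PERIOD = 50

load = rng.random(N_SERVERS)
for t in range(2000):
    size = rng.uniform(0.5, 2.0)
    s = min(int(load.mean() * STATE_BINS), STATE_BINS - 1)

    # epsilon-greedy offloading action on the fast timescale
    if rng.random() < eps:
        a = int(rng.integers(N_ACTIONS))
    else:
        a = int(np.argmax(q[s]))

    latency = task_latency(a, load, size)
    reward = -latency                      # objective: minimize offloading latency

    # fast update: immediate one-step value update (no bootstrapping, for brevity)
    q[s, a] += alpha_fast * (reward - q[s, a])

    if a > 0:
        offload_counts[a - 1] += 1
        load[a - 1] = min(1.0, load[a - 1] + 0.05)
    load = np.maximum(0.0, load - 0.02)    # servers gradually drain their queues

    # slow update: every SLOW_PERIOD tasks, shift caching preference toward
    # the servers that received the most offloaded work
    if (t + 1) % SLOW_PERIOD == 0:
        cache_score = 0.9 * cache_score + 0.1 * offload_counts
        offload_counts[:] = 0

print("learned Q-values per load level:\n", q.round(2))
print("cache preference per server:", cache_score.round(1))
```

The two update frequencies mirror the rapid/slow split named in the abstract: latency-critical offloading choices adapt after every task, while application caching, which is costlier to change, adapts on a slower timescale.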
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.