Search Results (42)
  • Open Access

    ARTICLE

    Evolutionary Algorithm Based Task Scheduling in IoT Enabled Cloud Environment

    R. Joshua Samuel Raj1, M. Varalatchoumy2, V. L. Helen Josephine3, A. Jegatheesan4, Seifedine Kadry5, Maytham N. Meqdad6, Yunyoung Nam7,*

    CMC-Computers, Materials & Continua, Vol.71, No.1, pp. 1095-1109, 2022, DOI:10.32604/cmc.2022.021859

Abstract Internet of Things (IoT) is transforming the technical setting of conventional systems and finds applicability in smart cities, smart healthcare, smart industry, etc. In addition, the application areas relating to IoT-enabled models are resource-limited and necessitate crisp responses, low latencies, and high bandwidth, which are beyond their abilities. Cloud computing (CC) is treated as a resource-rich solution to the above-mentioned challenges, but the intrinsic high latency of CC makes it nonviable: the longer latency degrades the outcome of IoT-based smart systems. CC is an emergent, dispersed, inexpensive computing paradigm with a massive assembly of heterogeneous autonomous systems.… More >

  • Open Access

    ARTICLE

    A New Task Scheduling Scheme Based on Genetic Algorithm for Edge Computing

    Zhang Nan1, Li Wenjing1,*, Liu Zhu1, Li Zhi1, Liu Yumin1, Nurun Nahar2

    CMC-Computers, Materials & Continua, Vol.71, No.1, pp. 843-854, 2022, DOI:10.32604/cmc.2022.017504

Abstract With the continuous evolution of smart grid and global energy interconnection technology, a large number of intelligent terminals have been connected to the power grid, which can be used to provide resource services as edge nodes. Traditional cloud computing can provide storage and task computing services in the power grid, but it faces challenges such as resource bottlenecks, time delays, and limited network bandwidth. Edge computing is an effective supplement to cloud computing because it can provide users with local computing services at lower latency. However, because the resources in a single edge node are limited, resource-intensive tasks… More >

  • Open Access

    ARTICLE

    Novel Power-Aware Optimization Methodology and Efficient Task Scheduling Algorithm

    K. Sathis Kumar1,*, K. Paramasivam2

    Computer Systems Science and Engineering, Vol.41, No.1, pp. 209-224, 2022, DOI:10.32604/csse.2022.019531

Abstract The performance of central processing units (CPUs) can be enhanced by integrating multiple cores into a single chip. CPU performance can be further improved by allocating tasks using an intelligent strategy. If small tasks wait or execute for a long time, the CPU consumes more power. Thus, the amount of power consumed by CPUs can be reduced without increasing the frequency. Cores are connected by interconnect lines and organized together to form a network called a network-on-chip (NoC). NoCs are mainly used in the design of processors. However, their performance can still be enhanced by reducing power… More >

  • Open Access

    ARTICLE

    Energy-Aware Scheduling for Tasks with Target-Time in Blockchain based Data Centres

    I. Devi*, G.R. Karpagam

    Computer Systems Science and Engineering, Vol.40, No.2, pp. 405-419, 2022, DOI:10.32604/csse.2022.018573

Abstract Cloud computing infrastructures are intended to provide computing services to end-users through the internet in a pay-per-use model. The extensive deployment of the cloud and the continuous increase in the capacity and utilization of data centers (DCs) lead to massive power consumption. This intensifying scale of DCs has made energy consumption a critical concern. This paper emphasizes the task scheduling algorithm by formulating a system model to minimize the makespan and energy consumption incurred in a data center. Also, an energy-aware task scheduling scheme for a Blockchain-based data center is proposed to offer an optimal solution that minimizes makespan and energy consumption.… More >
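The joint makespan/energy objective described in this abstract can be evaluated for a given schedule with a simple model. The sketch below is illustrative only: the linear busy/idle power model and the wattage values are assumptions, not the paper's actual formulation.

```python
def makespan_energy(assignment, times, p_active=200.0, p_idle=100.0):
    """Evaluate a task-to-machine assignment under a hypothetical
    linear power model.

    assignment: dict task -> machine; times: dict task -> execution time.
    p_active / p_idle: assumed power draw (watts) while busy / idle.
    """
    # Total busy time per machine.
    busy = {}
    for task, machine in assignment.items():
        busy[machine] = busy.get(machine, 0.0) + times[task]
    makespan = max(busy.values())
    # Each machine draws p_active while busy and p_idle until
    # the last task in the schedule finishes.
    energy = sum(b * p_active + (makespan - b) * p_idle
                 for b in busy.values())
    return makespan, energy
```

A scheduler would search over assignments to trade these two values off; here the function only scores one candidate.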

  • Open Access

    ARTICLE

    Task Scheduling Optimization in Cloud Computing Based on Genetic Algorithms

    Ahmed Y. Hamed1,*, Monagi H. Alkinani2

    CMC-Computers, Materials & Continua, Vol.69, No.3, pp. 3289-3301, 2021, DOI:10.32604/cmc.2021.018658

Abstract Task scheduling is the main problem in cloud computing that reduces system performance; it is an important means of arranging user needs and achieving multiple goals. Cloud computing is the most popular technology nowadays and has much research potential in various areas like resource allocation, task scheduling, security, privacy, etc. To improve system performance, an efficient task-scheduling algorithm is required. Existing task-scheduling algorithms focus on task-resource requirements, CPU memory, execution time, and execution cost. In this paper, a task scheduling algorithm based on a Genetic Algorithm (GA) has been presented for assigning and executing different tasks. The proposed algorithm aims… More >
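A GA-based task scheduler of the kind this abstract describes can be sketched as follows. The chromosome encoding (task index → VM index), truncation selection, one-point crossover, mutation rate, and makespan fitness are generic illustrative choices, not the authors' exact algorithm.

```python
import random

def makespan(chromosome, times, n_vms):
    # chromosome[i] = VM assigned to task i; times[i] = task length.
    load = [0.0] * n_vms
    for task, vm in enumerate(chromosome):
        load[vm] += times[task]
    return max(load)

def ga_schedule(times, n_vms, pop_size=30, generations=100, seed=0):
    """Minimize makespan with a simple generational GA (sketch)."""
    rng = random.Random(seed)
    n = len(times)
    pop = [[rng.randrange(n_vms) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: makespan(c, times, n_vms))
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                # point mutation
                child[rng.randrange(n)] = rng.randrange(n_vms)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda c: makespan(c, times, n_vms))
    return best, makespan(best, times, n_vms)
```

Real systems would extend the fitness with cost or energy terms; the structure stays the same.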

  • Open Access

    ARTICLE

    Monarch Butterfly Optimization for Reliable Scheduling in Cloud

    B. Gomathi1, S. T. Suganthi2,*, Karthikeyan Krishnasamy3, J. Bhuvana4

    CMC-Computers, Materials & Continua, Vol.69, No.3, pp. 3693-3710, 2021, DOI:10.32604/cmc.2021.018159

Abstract Enterprises have extensively adopted the cloud computing environment since it provides on-demand virtualized cloud application resources. The scheduling of cloud tasks is a well-recognized NP-hard problem, and the task scheduling problem becomes convoluted when satisfying multiple objectives that are conflicting in nature. In this paper, a Multi-Objective Improved Monarch Butterfly Optimization (MOIMBO) algorithm is applied to solve multi-objective task scheduling problems in the cloud and to obtain Pareto optimal solutions. Three conflicting objectives, namely makespan, reliability, and resource utilization, are considered for the task scheduling problem. The Epsilon-fuzzy dominance sort method is utilized in the multi-objective domain to select the foremost… More >

  • Open Access

    ARTICLE

    Run-Time Dynamic Resource Adjustment for Mitigating Skew in MapReduce

    Zhihong Liu1, Shuo Zhang2,*, Yaping Liu2, Xiangke Wang1, Dong Yin1

    CMES-Computer Modeling in Engineering & Sciences, Vol.126, No.2, pp. 771-790, 2021, DOI:10.32604/cmes.2021.013244

    Abstract MapReduce is a widely used programming model for large-scale data processing. However, it still suffers from the skew problem, which refers to the case in which load is imbalanced among tasks. This problem can cause a small number of tasks to consume much more time than other tasks, thereby prolonging the total job completion time. Existing solutions to this problem commonly predict the loads of tasks and then rebalance the load among them. However, solutions of this kind often incur high performance overhead due to the load prediction and rebalancing. Moreover, existing solutions target the partitioning skew for reduce tasks,… More >

  • Open Access

    ARTICLE

    Application Layer Scheduling in Cloud: Fundamentals, Review and Research Directions

    Vaibhav Pandey, Poonam Saini

    Computer Systems Science and Engineering, Vol.34, No.6, pp. 357-376, 2019, DOI:10.32604/csse.2019.34.357

Abstract The cloud computing paradigm facilitates a finite pool of on-demand virtualized resources on a pay-per-use basis. For large-scale heterogeneous distributed systems like a cloud, scheduling is an essential component of resource management at the application layer as well as at the virtualization layer in order to deliver the optimal Quality of Service (QoS). Cloud scheduling, in general, is an NP-hard problem due to its large solution space; thus, it is difficult to find an optimal solution within a reasonable time. In application layer scheduling, tasks are mapped to logical resources (i.e., virtual machines), aiming to optimize one or more… More >

  • Open Access

    ARTICLE

    A Load Balanced Task Scheduling Heuristic for Large-Scale Computing Systems

    Sardar Khaliq uz Zaman1, Tahir Maqsood1, Mazhar Ali1, Kashif Bilal1, Sajjad A. Madani1, Atta ur Rehman Khan2,*

    Computer Systems Science and Engineering, Vol.34, No.2, pp. 79-90, 2019, DOI:10.32604/csse.2019.34.079

Abstract Optimal task allocation in Large-Scale Computing Systems (LSCSs) that endeavors to balance the load across limited computing resources is considered an NP-hard problem. The MinMin algorithm is one of the most widely used heuristics for scheduling tasks on limited computing resources. MinMin achieves a smaller makespan than other algorithms, such as Heterogeneous Earliest Finish Time (HEFT), duplication-based algorithms, and clustering algorithms. However, MinMin results in unbalanced utilization of resources, especially when the majority of tasks have lower computational requirements. In this work, we consider a computational model where each machine has a certain bounded capacity to execute a predefined number of tasks… More >
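The MinMin heuristic this abstract builds on can be sketched as follows, assuming the standard expected-time-to-compute (ETC) matrix formulation. This is the generic heuristic only, not the paper's bounded-capacity variant.

```python
def min_min(etc):
    """MinMin scheduling heuristic (sketch).

    etc[i][j]: expected execution time of task i on machine j.
    Repeatedly picks the task whose minimum completion time is
    smallest and assigns it to the machine achieving that time.
    """
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines          # machine ready times
    unscheduled = set(range(n_tasks))
    assignment = {}
    while unscheduled:
        best_task, best_machine, best_ct = None, None, float("inf")
        for t in unscheduled:
            for m in range(n_machines):
                ct = ready[m] + etc[t][m]   # completion time of t on m
                if ct < best_ct:
                    best_task, best_machine, best_ct = t, m, ct
        assignment[best_task] = best_machine
        ready[best_machine] = best_ct
        unscheduled.remove(best_task)
    return assignment, max(ready)       # schedule and its makespan
```

Because short tasks are always placed first, fast machines fill up with small tasks, which is exactly the imbalance the abstract criticizes.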

  • Open Access

    ARTICLE

    TSLBS: A Time-Sensitive and Load Balanced Scheduling Approach to Wireless Sensor Actor Networks

    Morteza Okhovvat, Mohammad Reza Kangavari*

    Computer Systems Science and Engineering, Vol.34, No.1, pp. 13-21, 2019, DOI:10.32604/csse.2019.34.013

Abstract Existing works on scheduling in Wireless Sensor Actor Networks (WSANs) are mostly concerned with energy savings and ignore time constraints, thus increasing the make-span of the network. Moreover, these algorithms usually do not consider the balance of workloads on the actor nodes; hence, some of the actors are busy while others are idle. These problems leave the actors underutilized and reduce the actors' lifetime. In this paper we take both time awareness and the balance of workloads on the actors in WSANs into account and propose a convex optimization model (TAMMs) to minimize make-span.… More >

Displaying 31-40 of 42 results (page 4).