Search Results (18)
  • Open Access

    ARTICLE

    Improving Cache Management with Redundant RDDs Eviction in Spark

    Yao Zhao1, Jian Dong1,*, Hongwei Liu1, Jin Wu2, Yanxin Liu1

    CMC-Computers, Materials & Continua, Vol.68, No.1, pp. 727-741, 2021, DOI:10.32604/cmc.2021.016462

    Abstract Efficient cache management plays a vital role in in-memory data-parallel systems such as Spark, Tez, Storm and HANA. Recent research, notably on the Least Reference Count (LRC) and Most Reference Distance (MRD) policies, has shown that dependency-aware cache management practices that consider the application's directed acyclic graph (DAG) perform well in Spark. However, these practices ignore deeper relationships between RDDs and cache redundant RDDs that share the same child RDDs, which degrades memory performance. Hence, in memory-constrained situations, systems may encounter a performance bottleneck due to frequent data block replacement. In addition, the prefetch mechanisms in some…
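    The LRC policy mentioned in this abstract ranks cached blocks by how many downstream computations still reference them and evicts the least-referenced block first. A minimal stand-alone sketch of that idea follows; the DAG structure and all names are illustrative assumptions, not Spark's actual API or the paper's implementation.

```python
# Sketch of Least-Reference-Count (LRC) eviction over a task DAG.
# The DAG maps each cached block to the child blocks that depend on it;
# the cached block with the fewest unmaterialized children is evicted first.
# All identifiers here are hypothetical, for illustration only.

def reference_counts(dag, materialized):
    """Count, per block, how many of its children are not yet computed."""
    return {
        block: sum(1 for child in children if child not in materialized)
        for block, children in dag.items()
    }

def pick_victim(dag, materialized, cached):
    """Return the cached block with the lowest remaining reference count."""
    counts = reference_counts(dag, materialized)
    return min(cached, key=lambda b: counts.get(b, 0))

dag = {"rdd1": ["rdd3", "rdd4"], "rdd2": ["rdd4"]}
victim = pick_victim(dag, materialized=set(), cached=["rdd1", "rdd2"])
# rdd1 still feeds two unmaterialized children, rdd2 only one,
# so the LRC victim is rdd2
```

    The redundancy the paper targets would show up here as two cached blocks whose child sets are identical: under plain LRC both are kept, even though one suffices.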

  • Open Access

    ARTICLE

    Efficient Algorithms for Cache-Throughput Analysis in Cellular-D2D 5G Networks

    Nasreen Anjum1,*, Zhaohui Yang1, Imran Khan2, Mahreen Kiran3, Falin Wu4, Khaled Rabie5, Shikh Muhammad Bahaei1

    CMC-Computers, Materials & Continua, Vol.67, No.2, pp. 1759-1780, 2021, DOI:10.32604/cmc.2021.014635

    Abstract In this paper, we propose a two-tiered segment-based Device-to-Device (S-D2D) caching approach to decrease the startup and playback delay experienced by Video-on-Demand (VoD) users in a cellular network. In the S-D2D caching approach, the cache space of each mobile device is divided into two cache blocks. The first cache block is reserved for caching and delivering the beginning portion of the most popular video files, and the second cache block caches the latter portion of the requested video files, fully or partially, depending on users' video-watching behaviour and the popularity of the videos. In this approach, before caching, a video is divided and grouped in a…

  • Open Access

    ARTICLE

    A Cache Replacement Policy Based on Multi-Factors for Named Data Networking

    Meiju Yu1, Ru Li1,*, Yuwen Chen2

    CMC-Computers, Materials & Continua, Vol.65, No.1, pp. 321-336, 2020, DOI:10.32604/cmc.2020.010831

    Abstract Named Data Networking (NDN) is one of the most promising future Internet architectures, and every router in NDN can cache content passing by, which greatly reduces network traffic and improves the speed of content distribution and retrieval. To make full use of the limited caching space in routers, designing an efficient cache replacement policy is an urgent challenge. However, existing cache replacement policies consider only a few of the factors that affect cache performance. In this paper, we present a cache replacement policy based on multiple factors for NDN (CRPM), in which the content…
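    A multi-factor replacement policy of the kind this abstract describes typically scores each cached content by a weighted combination of signals and replaces the lowest-scoring entry. The factors and weights below are illustrative assumptions, not CRPM's actual formula.

```python
# Illustrative weighted-score cache replacement (NOT the paper's exact CRPM).
# Each cached content is scored from popularity (hit count), freshness
# (inverse age), and retrieval cost (hops back to the source); the entry
# with the lowest combined score is replaced first.

def score(entry, w_pop=0.5, w_fresh=0.3, w_cost=0.2):
    return (w_pop * entry["hits"]
            + w_fresh * 1.0 / (1.0 + entry["age"])
            + w_cost * entry["hops_to_source"])

def choose_replacement(cache):
    """Return the name of the cached content with the lowest score."""
    return min(cache, key=lambda name: score(cache[name]))

cache = {
    "video/a": {"hits": 10, "age": 2, "hops_to_source": 3},
    "video/b": {"hits": 1, "age": 50, "hops_to_source": 1},
}
# video/b is unpopular, stale, and cheap to re-fetch, so it scores
# far lower than video/a and is chosen for replacement
```

    The design point is that a single-factor policy (e.g. pure LRU or pure LFU) ignores the other signals; the weights let the operator trade them off.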

  • Open Access

    ARTICLE

    Massive Files Prefetching Model Based on LSTM Neural Network with Cache Transaction Strategy

    Dongjie Zhu1, Haiwen Du6, Yundong Sun1, Xiaofang Li2, Rongning Qu2, Hao Hu1, Shuangshuang Dong1, Helen Min Zhou3, Ning Cao4,5,*

    CMC-Computers, Materials & Continua, Vol.63, No.2, pp. 979-993, 2020, DOI:10.32604/cmc.2020.06478

    Abstract In distributed storage systems, file access efficiency has an important impact on the real-time nature of information forensics. As a popular approach to improving file access efficiency, a prefetching model fetches data before it is needed according to the file access pattern, which reduces I/O waiting time and increases system concurrency. However, a prefetching model needs to mine the degree of association between files to ensure prefetching accuracy. In the massive-small-file situation, the sheer volume of files poses a challenge to the efficiency and accuracy of relevance mining. In this paper, we propose a…

  • Open Access

    ARTICLE

    A Dynamic Memory Allocation Optimization Mechanism Based on Spark

    Suzhen Wang1, Shanshan Geng1, Zhanfeng Zhang1, Anshan Ye2, Keming Chen2, Zhaosheng Xu2, Huimin Luo2, Gangshan Wu3,*, Lina Xu4, Ning Cao5

    CMC-Computers, Materials & Continua, Vol.61, No.2, pp. 739-757, 2019, DOI:10.32604/cmc.2019.06097

    Abstract Spark is a distributed, memory-based data processing framework. Memory allocation is a central question in Spark research: a good memory allocation scheme can effectively improve task execution efficiency and memory resource utilization. Aiming at the memory allocation problem in Spark 2.x, this paper optimizes the memory allocation strategy by analyzing the Spark memory model, existing cache replacement algorithms and memory allocation methods, on the basis of minimizing the storage area and allocating the execution area on demand. The work mainly includes two parts: cache replacement optimization and…

  • Open Access

    ARTICLE

    An Improved Memory Cache Management Study Based on Spark

    Suzhen Wang1, Yanpiao Zhang1, Lu Zhang1, Ning Cao2,*, Chaoyi Pang3

    CMC-Computers, Materials & Continua, Vol.56, No.3, pp. 415-431, 2018, DOI:10.3970/cmc.2018.03716

    Abstract Spark is a fast unified analysis engine for big data and machine learning, in which memory is a crucial resource. Resilient Distributed Datasets (RDDs) are parallel data structures that allow users to explicitly persist intermediate results in memory or on disk, and each RDD can be divided into several partitions. During task execution, Spark automatically monitors cache usage on each node, and when an RDD must be stored in a cache whose space is insufficient, the system drops old data partitions in a least recently used (LRU) fashion to release space. However,…
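    The default drop-out behaviour this abstract describes is plain LRU eviction. A minimal stand-alone illustration using Python's `OrderedDict` follows; it is a generic sketch of the policy, not Spark's BlockManager, and the partition names are made up.

```python
from collections import OrderedDict

# Minimal LRU cache illustrating "drop the least recently used partition
# when space runs out". Generic sketch only; not Spark code.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # insertion order tracks recency

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

c = LRUCache(2)
c.put("p1", "partition-1")
c.put("p2", "partition-2")
c.get("p1")                  # touching p1 makes p2 the LRU entry
c.put("p3", "partition-3")   # capacity exceeded: p2 is evicted
```

    The weakness the paper targets is visible here: eviction looks only at recency, so a partition that an upcoming stage still needs can be dropped just because it was touched least recently.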

  • Open Access

    ARTICLE

    RETRACTED: Mitigating Content Caching Attack in NDN

    Zhiqiang Ruan1,*, Haibo Luo1, Wenzhong Lin1, Jie Wang2

    CMC-Computers, Materials & Continua, Vol.56, No.3, pp. 483-499, 2018, DOI:10.3970/cmc.2018.03687

    Abstract Content caching is a core component of Named Data Networking (NDN), where content is cached in routers and served for future requests. However, an adversary can launch a verification attack by placing poisoned data with a legitimate name into the network, forcing routers on the delivery path to frequently verify the content. Since NDN employs a digital signature on each piece of content, verifying all content would exhaust routers' computational resources due to the massive amount of data in the network. In this paper, we propose a selective verification scheme for contents that are hit in the content store and allow…

  • Open Access

    ARTICLE

    GFCache: A Greedy Failure Cache Considering Failure Recency and Failure Frequency for an Erasure-Coded Storage System

    Mingzhu Deng1, Fang Liu2,*, Ming Zhao3, Zhiguang Chen2, Nong Xiao2,1

    CMC-Computers, Materials & Continua, Vol.58, No.1, pp. 153-167, 2019, DOI:10.32604/cmc.2019.03585

    Abstract In the big data era, data unavailability, either temporary or permanent, has become a normal daily occurrence. Unlike permanent data failure, which is fixed through a background job, temporarily unavailable data is recovered on the fly to serve the ongoing read request. However, such newly revived data is discarded after serving the request, under the assumption that data experiencing temporary failures may come back alive later. This disposal of failed data prevents the sharing of failure information among clients and leads to many unnecessary data recovery processes (e.g., caused by either recurring unavailability of a data item or multiple…

Displaying results 11-18 of 18 (page 2).