Search Results (3)
  • Open Access

    ARTICLE

    ADFEmu: Enhancing Firmware Fuzzing with Direct Memory Access (DMA) Input Emulation Using Concolic Execution and Large Language Models (LLMs)

    Yixin Ding1, Xinjian Zhao1, Zicheng Wu1, Yichen Zhu2, Longkun Bai2, Hao Han2,*

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 5977-5993, 2025, DOI:10.32604/cmc.2025.065672 - 30 July 2025

    Abstract Fuzz testing is a widely adopted technique for uncovering bugs and security vulnerabilities in embedded firmware. However, many embedded systems rely heavily on peripherals, rendering conventional fuzzing techniques ineffective: when peripheral responses are missing or incorrect, the firmware under test may crash or exit prematurely, significantly limiting code coverage. While prior re-hosting approaches have made progress in simulating Memory-Mapped Input/Output (MMIO) and interrupt-based peripherals, they either ignore Direct Memory Access (DMA) or handle it in an oversimplified manner. In this work, we present ADFEmu, a novel automated firmware re-hosting framework that enables effective fuzzing of DMA-enabled firmware. ADFEmu integrates…
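    As a rough illustration of the peripheral-emulation problem this abstract describes, the sketch below (a minimal, assumption-laden sketch, not ADFEmu's actual design) uses the Unicorn emulator to answer every read from a hypothetical MMIO window with fuzzer-supplied bytes; a DMA-aware framework such as ADFEmu must additionally populate the RAM buffers that DMA controllers fill. All addresses, names, and the input format here are assumptions.

        from unicorn import Uc, UC_ARCH_ARM, UC_MODE_THUMB, UC_HOOK_MEM_READ

        FLASH_BASE, FLASH_SIZE = 0x0800_0000, 0x10000   # hypothetical memory map
        RAM_BASE,   RAM_SIZE   = 0x2000_0000, 0x10000
        MMIO_BASE,  MMIO_SIZE  = 0x4000_0000, 0x1000

        class InputFeeder:
            """Doles out fuzzer input one MMIO read at a time."""
            def __init__(self, data: bytes):
                self.data, self.pos = data, 0
            def take(self, n: int) -> bytes:
                chunk = self.data[self.pos:self.pos + n].ljust(n, b"\x00")
                self.pos += n
                return chunk

        def on_mmio_read(uc, access, address, size, value, feeder):
            # The hook fires before the load completes; patching the location
            # makes the firmware see fuzzer data instead of a stale register.
            uc.mem_write(address, feeder.take(size))

        def run_once(firmware: bytes, fuzz_input: bytes):
            uc = Uc(UC_ARCH_ARM, UC_MODE_THUMB)
            uc.mem_map(FLASH_BASE, FLASH_SIZE)
            uc.mem_map(RAM_BASE, RAM_SIZE)
            uc.mem_map(MMIO_BASE, MMIO_SIZE)
            uc.mem_write(FLASH_BASE, firmware)
            uc.hook_add(UC_HOOK_MEM_READ, on_mmio_read, InputFeeder(fuzz_input),
                        begin=MMIO_BASE, end=MMIO_BASE + MMIO_SIZE - 1)
            # A DMA-enabled firmware would also need its DMA destination
            # buffers in RAM filled with input before the transfer "completes".
            uc.emu_start(FLASH_BASE | 1, FLASH_BASE + len(firmware))  # Thumb entry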

  • Open Access

    ARTICLE

    CNN Accelerator Using Proposed Diagonal Cyclic Array for Minimizing Memory Accesses

    Hyun-Wook Son1, Ali A. Al-Hamid1,2, Yong-Seok Na1, Dong-Yeong Lee1, Hyung-Won Kim1,*

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 1665-1687, 2023, DOI:10.32604/cmc.2023.038760 - 30 August 2023

    Abstract This paper presents the architecture of a Convolutional Neural Network (CNN) accelerator based on a new processing element (PE) array called a diagonal cyclic array (DCA). As demonstrated, it can significantly reduce the burden of repeated memory accesses for the feature data and weight parameters of CNN models, which maximizes the data reuse rate and improves computation speed. Furthermore, an integrated computation architecture has been implemented for the max-pooling and activation-function operations that follow the convolution calculation, reducing hardware resource usage. To evaluate the effectiveness of the proposed architecture, a CNN accelerator has been…
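    To see why minimizing repeated fetches matters, here is a back-of-the-envelope sketch (illustrating the general reuse argument, not the paper's DCA microarchitecture): it counts operand fetches for a K×K convolution over an H×W feature map with no reuse versus ideal full reuse.

        def conv_fetch_counts(H: int, W: int, K: int):
            out_h, out_w = H - K + 1, W - K + 1
            macs = out_h * out_w * K * K
            naive = 2 * macs          # refetch one input and one weight per MAC
            ideal = H * W + K * K     # fetch each input and weight exactly once
            return naive, ideal

        naive, ideal = conv_fetch_counts(H=56, W=56, K=3)
        print(f"no reuse: {naive} fetches, full reuse: {ideal} fetches "
              f"({naive / ideal:.0f}x fewer)")

    A PE-array design such as a DCA sits between these extremes: each fetched operand is cycled through many PEs before being discarded.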

  • Open Access

    ARTICLE

    Characterization of Memory Access in Deep Learning and Its Implications in Memory Management

    Jeongha Lee1, Hyokyung Bahn2,*

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 607-629, 2023, DOI:10.32604/cmc.2023.039236 - 08 June 2023

    Abstract Due to the recent trend toward software intelligence in the Fourth Industrial Revolution, deep learning has become a mainstream workload for modern computer systems. Since the data size of deep learning workloads continues to grow, efficiently managing the limited memory capacity for these workloads becomes important. In this paper, we analyze memory accesses in deep learning workloads and identify several characteristics that differentiate them from traditional workloads. First, when comparing instruction and data accesses, data accesses account for 96%–99% of total memory accesses in deep learning workloads, which is quite different from traditional workloads. Second, when…
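    The 96%–99% figure suggests a simple sanity check one could run on one's own traces; the sketch below assumes a hypothetical trace file where each line starts with "I" (instruction fetch), "R" (data read), or "W" (data write), as produced by common binary-instrumentation tools. It is methodology-neutral and not the paper's toolchain; the file name is made up.

        from collections import Counter

        def data_access_share(trace_path: str) -> float:
            kinds = Counter()
            with open(trace_path) as f:
                for line in f:
                    parts = line.split()
                    if not parts:
                        continue
                    if parts[0] in ("R", "W"):   # data read or write
                        kinds["data"] += 1
                    elif parts[0] == "I":        # instruction fetch
                        kinds["inst"] += 1
            total = kinds["data"] + kinds["inst"]
            return kinds["data"] / total if total else 0.0

        print(f"data share: {data_access_share('memtrace.log'):.1%}")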
