Search Results (1)
  • Open Access

    ARTICLE

    A Resource-Efficient Convolutional Neural Network Accelerator Using Fine-Grained Logarithmic Quantization

    Hadee Madadum*, Yasar Becerikli

    Intelligent Automation & Soft Computing, Vol.33, No.2, pp. 681-695, 2022, DOI:10.32604/iasc.2022.023831

    Abstract Convolutional Neural Network (ConNN) implementations on Field Programmable Gate Array (FPGA) devices are being studied since the computational capabilities of FPGAs have improved recently. Model compression is required to enable ConNN deployment on resource-constrained FPGA devices. Logarithmic quantization is an efficient compression method that can compress a model to a very low bit-width without significant deterioration in performance. It is also hardware-friendly, since multiplication can be performed with bitwise operations. However, logarithmic quantization suffers from low resolution at high inputs because of its exponential spacing. Therefore, we propose a modified logarithmic quantization method with a fine resolution to compress a neural network…
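    The hardware-friendliness claimed in the abstract comes from the fact that multiplying by a power-of-two weight reduces to a bit shift. A minimal sketch of plain (not the paper's fine-grained) logarithmic quantization, assuming weights with magnitude at most 1 and integer activations:

    ```python
    import math

    def log2_quantize(w, bit_width=4):
        """Quantize a weight to the nearest power of two.
        Illustrative sketch only, not the paper's fine-grained scheme.
        Assumes |w| <= 1, so exponents are clamped to [-2**(bit_width-1), 0]."""
        if w == 0.0:
            return 0, 0  # sign 0 encodes a zero weight
        sign = 1 if w > 0 else -1
        exp = round(math.log2(abs(w)))
        min_exp = -(2 ** (bit_width - 1))
        exp = max(min_exp, min(0, exp))
        return sign, exp

    def shift_multiply(activation_int, sign, exp):
        """Multiply an integer activation by a power-of-two weight
        (sign * 2**exp) using only shifts instead of a hardware multiplier."""
        if sign == 0:
            return 0
        if exp >= 0:
            return sign * (activation_int << exp)
        return sign * (activation_int >> -exp)
    ```

    For example, a weight of 0.26 quantizes to (1, -2), so multiplying an activation of 8 becomes a right shift by 2, yielding 2 (versus 2.08 exactly). The coarse spacing between adjacent powers of two at larger magnitudes is the resolution problem the abstract refers to.
    
    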
