Search Results (2)
  • Open Access


    Hybrid Chaotic Salp Swarm with Crossover Algorithm for Underground Wireless Sensor Networks

    Mariem Ayedi, Walaa H. ElAshmawi, Esraa Eldesouky

    CMC-Computers, Materials & Continua, Vol.72, No.2, pp. 2963-2980, 2022, DOI:10.32604/cmc.2022.025741

    Abstract Resource management in Underground Wireless Sensor Networks (UWSNs) is one of the pillars to extend the network lifetime. An intriguing design goal for such networks is to achieve balanced energy and spectral resource utilization. This paper focuses on optimizing the resource efficiency in UWSNs where underground relay nodes amplify and forward sensed data, received from the buried source nodes through a lossy soil medium, to the aboveground base station. A new algorithm called the Hybrid Chaotic Salp Swarm and Crossover (HCSSC) algorithm is proposed to obtain the optimal source and relay transmission powers to maximize the network resource efficiency. The…
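For orientation, the base metaheuristic the abstract builds on can be sketched as follows. This is a minimal one-dimensional Salp Swarm Algorithm (SSA) for minimization; the paper's HCSSC hybrid adds chaotic maps and a crossover operator on top of these update rules, which are not reproduced here, and the function and parameter names are hypothetical.

```python
import math
import random

def salp_swarm(obj, lb, ub, n_salps=20, n_iter=100):
    """Minimal 1-D Salp Swarm Algorithm sketch: minimize obj over [lb, ub]."""
    salps = [random.uniform(lb, ub) for _ in range(n_salps)]
    best = min(salps, key=obj)  # food source F
    for t in range(1, n_iter + 1):
        # c1 decays over iterations, shifting from exploration to exploitation
        c1 = 2 * math.exp(-((4 * t / n_iter) ** 2))
        for i in range(n_salps):
            if i == 0:  # leader salp moves around the food source
                step = c1 * ((ub - lb) * random.random() + lb)
                salps[i] = best + step if random.random() < 0.5 else best - step
            else:       # follower salps average with the salp ahead (chain motion)
                salps[i] = (salps[i] + salps[i - 1]) / 2
            salps[i] = max(lb, min(ub, salps[i]))  # keep within bounds
        cand = min(salps, key=obj)
        if obj(cand) < obj(best):
            best = cand
    return best

# e.g. a toy stand-in for a transmission-power objective with optimum at 3.0
x = salp_swarm(lambda p: (p - 3.0) ** 2, 0.0, 10.0)
```

In the paper's setting, `obj` would score a candidate vector of source and relay transmission powers by the resulting network resource efficiency rather than this toy quadratic.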

  • Open Access


    A Resource-Efficient Convolutional Neural Network Accelerator Using Fine-Grained Logarithmic Quantization

    Hadee Madadum, Yasar Becerikli

    Intelligent Automation & Soft Computing, Vol.33, No.2, pp. 681-695, 2022, DOI:10.32604/iasc.2022.023831

    Abstract Convolutional Neural Network (ConNN) implementations on Field Programmable Gate Array (FPGA) are being studied since the computational capabilities of FPGA have been improved recently. Model compression is required to enable ConNN deployment on resource-constrained FPGA devices. Logarithmic quantization is one of the efficient compression methods that can compress a model to very low bit-width without significant deterioration in performance. It is also hardware-friendly, since multiplication can be carried out with bitwise operations. However, logarithmic quantization suffers from low resolution at large input magnitudes because the quantization levels are exponentially spaced. Therefore, we propose a modified logarithmic quantization method with a fine resolution to compress a neural network…
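The hardware-friendliness claimed in the abstract comes from restricting weights to signed powers of two, so that multiplication reduces to a bit shift. The sketch below shows plain (coarse) logarithmic quantization only, not the paper's fine-grained variant; `log2_quantize` and `shift_multiply` are hypothetical helper names.

```python
import math

def log2_quantize(w, bits=4):
    """Quantize weight w to a signed power of two, sign * 2**exp.
    The exponent is rounded, then clipped to the range a `bits`-bit
    code can represent (here: non-positive exponents only)."""
    if w == 0:
        return 0, 0
    sign = 1 if w > 0 else -1
    exp = round(math.log2(abs(w)))
    exp = max(1 - 2 ** (bits - 1), min(0, exp))  # clip exponent range
    return sign, exp

def shift_multiply(x, sign, exp):
    """Multiply integer activation x by the quantized weight sign * 2**exp
    with a bit shift instead of a hardware multiplier (exp <= 0 here)."""
    return sign * (x >> -exp)

sign, exp = log2_quantize(0.26)         # 0.26 is closest to 2**-2 = 0.25
result = shift_multiply(16, sign, exp)  # 16 * 0.25 via a right shift by 2
```

The resolution problem the abstract mentions is visible here: near zero the representable levels (…, 1/16, 1/8, 1/4, …) are densely packed, but the absolute gap between adjacent levels doubles with every step up in magnitude.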
