Open Access

ARTICLE

A Latency-Efficient Integration of Channel Attention for ConvNets

Woongkyu Park1, Yeongyu Choi2, Mahammad Shareef Mekala3, Gyu Sang Choi1, Kook-Yeol Yoo1, Ho-youl Jung1,*

1 Department of Information and Communication Engineering, Yeungnam University, Gyeongsan, 38541, Republic of Korea
2 RLRC for Autonomous Vehicle Parts and Materials Innovation, Yeungnam University, Gyeongsan, 38541, Republic of Korea
3 School of Computing, Robert Gordon University, Aberdeen, AB10 7QB, UK

* Corresponding Author: Ho-youl Jung

Computers, Materials & Continua 2025, 82(3), 3965-3981. https://doi.org/10.32604/cmc.2025.059966

Abstract

Designing fast and accurate neural networks is becoming essential in various vision tasks. Recently, the use of attention mechanisms has increased, aiming to enhance vision-task performance by selectively focusing on relevant parts of the input. In this paper, we concentrate on squeeze-and-excitation (SE)-based channel attention, considering the trade-off between latency and accuracy. We propose a variation of the SE module, called squeeze-and-excitation with layer normalization (SELN), in which layer normalization (LN) replaces the sigmoid activation function. This approach mitigates the vanishing gradient problem while enhancing the feature diversity and discriminability of channel attention. In addition, we propose a latency-efficient model named SELNeXt, in which the LN typically used in the ConvNeXt block is replaced by SELN to minimize additional latency-impacting operations. Through classification experiments on ImageNet-1k, we show that the proposed SELNeXt achieves higher top-1 accuracy than other ConvNeXt-based models at comparable latency. SELNeXt also achieves better object detection and instance segmentation performance on COCO than Swin Transformer and ConvNeXt for small-sized models. Our results indicate that LN is a strong candidate for replacing the activation function in attention mechanisms. Moreover, SELNeXt achieves a better accuracy-latency trade-off, making it favorable for real-time applications and edge computing. The code is available at (accessed on 06 December 2024).
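The abstract describes SELN as the standard squeeze-and-excitation block with the final sigmoid gate replaced by layer normalization. The following is a minimal PyTorch-style sketch of that idea; the reduction ratio, the use of linear layers and a ReLU in the excitation path, and all identifiers are assumptions carried over from the original SE design, not details confirmed by this page.

```python
import torch
import torch.nn as nn

class SELN(nn.Module):
    """Sketch of squeeze-and-excitation with layer normalization (SELN).

    Follows the standard SE recipe (global average pooling followed by a
    two-layer bottleneck), but the sigmoid gate is replaced by LayerNorm,
    as proposed in the paper. Reduction ratio and ReLU placement are
    assumptions based on the original SE block.
    """
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: (B, C, 1, 1)
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.act = nn.ReLU(inplace=True)
        self.fc2 = nn.Linear(channels // reduction, channels)
        self.ln = nn.LayerNorm(channels)               # replaces the sigmoid

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.pool(x).flatten(1)                    # (B, C) channel descriptor
        s = self.fc2(self.act(self.fc1(s)))            # excitation bottleneck
        s = self.ln(s)                                 # LN instead of sigmoid gating
        return x * s.view(b, c, 1, 1)                  # channel-wise rescaling
```

Note that unlike the sigmoid, LN does not bound the channel weights to (0, 1), so the rescaling is no longer a saturating gate; this appears to be part of what the authors mean by improved feature diversity and reduced gradient vanishing.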

Keywords

Attention mechanism; convolutional neural networks; image classification; object detection; semantic segmentation

Cite This Article

APA Style
Park, W., Choi, Y., Mekala, M.S., Choi, G.S., Yoo, K. et al. (2025). A latency-efficient integration of channel attention for convnets. Computers, Materials & Continua, 82(3), 3965–3981. https://doi.org/10.32604/cmc.2025.059966
Vancouver Style
Park W, Choi Y, Mekala MS, Choi GS, Yoo K, Jung H. A latency-efficient integration of channel attention for convnets. Comput Mater Contin. 2025;82(3):3965–3981. https://doi.org/10.32604/cmc.2025.059966
IEEE Style
W. Park, Y. Choi, M. S. Mekala, G. S. Choi, K. Yoo, and H. Jung, “A Latency-Efficient Integration of Channel Attention for ConvNets,” Comput. Mater. Contin., vol. 82, no. 3, pp. 3965–3981, 2025. https://doi.org/10.32604/cmc.2025.059966



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.