
Open Access

ARTICLE

Improving Convolutional Neural Network Performance Using Alpha-Based Adaptive Pooling for Image Classification

Nahdi Saubari1,2,*, Kunfeng Wang1,*, Rachmat Muwardi3,*, Andri Pranolo4
1 College of Information Science and Technology Program of Controlling Science and Engineering, Beijing University of Chemical Technology, Beijing, China
2 Informatics Department, Universitas Muhammadiyah Banjarmasin, Banjarmasin, Indonesia
3 Department of Electrical Engineering, Universitas Mercu Buana, Jakarta, Indonesia
4 Informatics Department, Universitas Ahmad Dahlan, Yogyakarta, Indonesia
* Corresponding Author: Nahdi Saubari. Email: email; Kunfeng Wang. Email: email; Rachmat Muwardi. Email: email
(This article belongs to the Special Issue: Novel Methods for Image Classification, Object Detection, and Segmentation, 2nd Edition)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2026.077087

Received 02 December 2025; Accepted 10 February 2026; Published online 28 February 2026

Abstract

This study proposes an Adaptive Pooling method based on an alpha (α) parameter to enhance the effectiveness and stability of convolutional neural networks (CNNs) in image classification tasks. Conventional pooling techniques, such as max pooling and average pooling, often exhibit limited adaptability when applied to datasets with heterogeneous distributions and varying levels of complexity. To address this limitation, the proposed approach introduces an α parameter, ranging from 0 to 1, that continuously regulates the contributions of maximum-based and average-based pooling operations within a unified and flexible framework. The proposed method is evaluated on two benchmark datasets, MNIST and CIFAR-10, representing grayscale and color image classification scenarios, respectively. Experiments are conducted across three CNN families of different depths—LeNet-5, a deeper custom-built CNN, and ResNet-18—to assess robustness under varying representational capacity. Under the best α setting with a 4 × 4 pooling configuration, Adaptive Pooling exhibits architecture-dependent behavior. On LeNet-5, Adaptive Pooling achieves 87.2% on MNIST and 30.1% on CIFAR-10, compared with 97.8% (max/average pooling) on MNIST and 60.1% (max pooling)/53.9% (average pooling) on CIFAR-10. In contrast, on the deeper custom CNN, Adaptive Pooling becomes competitive, reaching 99.7% on MNIST and 86.1% on CIFAR-10, which is comparable to the 99.6%–99.7% on MNIST and 84.5%–86.2% on CIFAR-10 achieved by conventional pooling. On ResNet-18, Adaptive Pooling attains 99.1% on MNIST, while CIFAR-10 performance decreases to 37.2% relative to the default global average pooling baseline (99.7% on MNIST and 89.0% on CIFAR-10), suggesting that performance also depends on where the pooling replacement is applied.
Overall, these findings indicate that α-controlled Adaptive Pooling provides a lightweight and configurable pooling strategy that can improve stability and achieve competitive accuracy in deeper CNNs, although it should be treated as a complementary mechanism rather than a universal replacement across all architectures.
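The blending rule described in the abstract can be sketched as a single operation per pooling window: α · max + (1 − α) · mean, where α = 1 recovers max pooling and α = 0 recovers average pooling. The minimal NumPy sketch below illustrates this idea for non-overlapping windows on a 2-D feature map; the function name, window handling, and fixed (non-learned) α are illustrative assumptions, and the paper's actual layer implementation may differ.

```python
import numpy as np

def adaptive_pool2d(x, k=4, alpha=0.5):
    """Illustrative alpha-blended pooling: alpha*max + (1-alpha)*avg per window.

    x     : 2-D feature map of shape (H, W), with H and W divisible by k.
    k     : pooling window size (the paper's best setting uses 4 x 4 windows).
    alpha : blend factor in [0, 1]; 1 -> max pooling, 0 -> average pooling.
    """
    H, W = x.shape
    # Gather non-overlapping k x k windows into the last axis.
    windows = x.reshape(H // k, k, W // k, k).transpose(0, 2, 1, 3)
    windows = windows.reshape(H // k, W // k, k * k)
    # Continuous interpolation between the two conventional pooling operators.
    return alpha * windows.max(axis=-1) + (1 - alpha) * windows.mean(axis=-1)

# Sanity check on a small 4 x 4 map with 2 x 2 windows:
x = np.arange(16.0).reshape(4, 4)
print(adaptive_pool2d(x, k=2, alpha=1.0))  # max pooling:     [[ 5.  7.] [13. 15.]]
print(adaptive_pool2d(x, k=2, alpha=0.0))  # average pooling: [[ 2.5  4.5] [10.5 12.5]]
```

In a trainable layer, α could either be fixed per experiment (as in the grid of α settings implied by "the best α setting") or parameterized and learned; the sketch above shows only the forward pooling rule.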

Keywords

Adaptive pooling; alpha parameter; convolutional neural network (CNN); image classification; MNIST; CIFAR-10; model generalization