Open Access
ARTICLE
Med-ReLU: A Parameter-Free Hybrid Activation Function for Deep Artificial Neural Network Used in Medical Image Segmentation
1 Department of Instrumentation and Control Engineering, Universiti Kuala Lumpur Malaysian Institute of Industrial Technology (UniKL MITEC), Bandar Seri Alam, Masai, 81750, Johor, Malaysia
2 Department of Electrical Engineering, College of Engineering, Qassim University, Buraydah, 52571, Saudi Arabia
3 Data Science Institute, University of Galway, IDA Business Park, Lower Dangan, Galway, H91 AEX4, Ireland
4 Department of Information Technology, College of Computer, Qassim University, Buraydah, 51452, Saudi Arabia
5 Department of Electrical Engineering, College of Engineering and Information Technology, Onaizah Colleges, Qassim, 56447, Saudi Arabia
* Corresponding Author: Muhammad Islam. Email:
Computers, Materials & Continua 2025, 84(2), 3029-3051. https://doi.org/10.32604/cmc.2025.064660
Received 21 February 2025; Accepted 17 April 2025; Issue published 03 July 2025
Abstract
Deep learning (DL), which evolved from artificial neural networks (ANNs), underpins most modern machine-learning algorithms. DL segmentation models rely on layer-by-layer convolution-based feature representation, guided by forward and backward propagation. A critical aspect of this process is the selection of an appropriate activation function (AF) to ensure robust model learning. However, existing activation functions often fail to effectively address the vanishing gradient problem, or they require manual parameter tuning that complicates their use. Moreover, most current research on activation function design focuses on classification tasks over natural image datasets such as MNIST, CIFAR-10, and CIFAR-100. To address this gap, this study proposes Med-ReLU, a novel activation function designed specifically for medical image segmentation. Med-ReLU prevents deep learning models from suffering from dead neurons or vanishing gradients. It is a hybrid activation function that combines the properties of ReLU and Softsign: for positive inputs, Med-ReLU adopts the linear behavior of ReLU to avoid vanishing gradients, while for negative inputs it follows Softsign's polynomial convergence, ensuring robust training and avoiding inactive neurons across the training set. The training performance and segmentation accuracy of Med-ReLU have been thoroughly evaluated, demonstrating stable learning behavior and resistance to overfitting; it consistently outperforms state-of-the-art activation functions in medical image segmentation tasks. Because Med-ReLU is parameter-free, it is simple to implement in complex deep learning architectures, and its effectiveness spans various neural network models and anomaly detection scenarios.
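The abstract does not give Med-ReLU's closed form. The following is a minimal sketch, assuming the piecewise behavior the abstract describes: the ReLU identity for non-negative inputs and the Softsign curve x/(1 + |x|) for negative inputs. The function name `med_relu` and the choice of PyTorch are illustrative assumptions, not the authors' reference implementation.

```python
import torch

def med_relu(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical sketch based only on the abstract: identity (ReLU-like)
    # for x >= 0, Softsign x / (1 + |x|) for x < 0. The paper's exact
    # definition may differ.
    return torch.where(x >= 0, x, x / (1 + torch.abs(x)))

# Under this form, negative inputs keep a non-zero gradient (no dead
# neurons), since d/dx [x / (1 + |x|)] = 1 / (1 + |x|)^2 > 0 for x < 0.
x = torch.tensor([-3.0, -0.5, 0.0, 2.0], requires_grad=True)
y = med_relu(x)
y.sum().backward()
print(y)       # tensor([-0.7500, -0.3333,  0.0000,  2.0000], grad_fn=...)
print(x.grad)  # tensor([0.0625, 0.4444, 1.0000, 1.0000])
```

Consistent with the abstract's claim that Med-ReLU is parameter-free, this sketch has no learnable or hand-tuned parameters, so it can be dropped into an existing architecture in place of ReLU without additional configuration.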
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.