Open Access

ARTICLE


LOEV-APO-MLP: Latin Hypercube Opposition-Based Elite Variation Artificial Protozoa Optimizer for Multilayer Perceptron Training

Zhiwei Ye1,2,3, Dingfeng Song1, Haitao Xie1,2,3,*, Jixin Zhang1,2, Wen Zhou1,2, Mengya Lei1,2, Xiao Zheng1,2, Jie Sun1, Jing Zhou1, Mengxuan Li1

1 School of Computer Science, Hubei University of Technology, Wuhan, 430068, China
2 Hubei Provincial Key Laboratory of Green Intelligent Computing Power Network, Wuhan, 430068, China
3 Hubei Provincial Engineering Technology Research Centre, Wuhan, 430068, China

* Corresponding Author: Haitao Xie.

Computers, Materials & Continua 2025, 85(3), 5509-5530. https://doi.org/10.32604/cmc.2025.067342

Abstract

The Multilayer Perceptron (MLP) is a fundamental neural network model widely applied across domains, particularly in lightweight image classification, speech recognition, and natural language processing. Despite its widespread success, MLP training often encounters significant challenges, including susceptibility to local optima, slow convergence, and high sensitivity to initial weight configurations. To address these issues, this paper proposes the Latin Hypercube Opposition-based Elite Variation Artificial Protozoa Optimizer (LOEV-APO), which strengthens global exploration and local exploitation simultaneously. LOEV-APO introduces a hybrid initialization strategy that combines Latin Hypercube Sampling (LHS) with Opposition-Based Learning (OBL), improving the diversity and coverage of the initial population. Moreover, an Elite Protozoa Variation Strategy (EPVS) is incorporated, which applies differential mutation to elite candidates, accelerating convergence and strengthening local search around high-quality solutions. Extensive experiments on six classification tasks and four function approximation tasks, covering a wide range of problem complexities, show that LOEV-APO consistently outperforms nine state-of-the-art metaheuristic algorithms and two gradient-based methods in convergence speed, solution accuracy, and robustness, while maintaining strong generalization. These findings suggest that LOEV-APO is a promising optimization tool for MLP training and a viable alternative to traditional gradient-based methods.
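
The two components summarized above admit a compact illustration. The following Python sketch shows one plausible reading of the hybrid LHS+OBL initialization and the elite differential mutation; the function names, the keep-best-N selection rule, the greedy replacement, and the parameter values (F, n_elite) are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def lhs_obl_init(fitness, pop_size, dim, lb, ub, rng=None):
        # Latin Hypercube Sampling: one sample per stratum in each
        # dimension, with strata shuffled independently per dimension.
        rng = np.random.default_rng() if rng is None else rng
        cut = (np.arange(pop_size) + rng.random((dim, pop_size))) / pop_size
        for d in range(dim):
            rng.shuffle(cut[d])
        pop = lb + cut.T * (ub - lb)               # shape: (pop_size, dim)
        # Opposition-Based Learning: reflect each candidate through the
        # centre of the search box; keep the best half of the combined
        # pool (selection rule assumed, minimization assumed).
        opp = lb + ub - pop
        pool = np.vstack([pop, opp])
        fit = np.apply_along_axis(fitness, 1, pool)
        keep = np.argsort(fit)[:pop_size]
        return pool[keep], fit[keep]

    def elite_variation(pop, fit, fitness, lb, ub, n_elite=5, F=0.5, rng=None):
        # Differential mutation applied only to elite candidates, with
        # greedy replacement: keep the trial only if it improves.
        rng = np.random.default_rng() if rng is None else rng
        for i in np.argsort(fit)[:n_elite]:
            others = np.setdiff1d(np.arange(len(pop)), [i])
            r1, r2 = rng.choice(others, size=2, replace=False)
            trial = np.clip(pop[i] + F * (pop[r1] - pop[r2]), lb, ub)
            f_trial = fitness(trial)
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
        return pop, fit

    # Usage on a toy objective (sphere function standing in for MLP loss):
    if __name__ == "__main__":
        sphere = lambda x: float(np.sum(x ** 2))
        pop, fit = lhs_obl_init(sphere, pop_size=30, dim=10, lb=-1.0, ub=1.0)
        pop, fit = elite_variation(pop, fit, sphere, lb=-1.0, ub=1.0)
        print(fit.min())

In MLP training, the fitness function would decode each candidate vector into the network's weights and biases and return the training loss; the sphere function here is only a stand-in.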

Keywords

Artificial protozoa optimizer; multilayer perceptron; Latin hypercube sampling; opposition-based learning; neural network training

Cite This Article

APA Style
Ye, Z., Song, D., Xie, H., Zhang, J., Zhou, W. et al. (2025). LOEV-APO-MLP: Latin Hypercube Opposition-Based Elite Variation Artificial Protozoa Optimizer for Multilayer Perceptron Training. Computers, Materials & Continua, 85(3), 5509–5530. https://doi.org/10.32604/cmc.2025.067342
Vancouver Style
Ye Z, Song D, Xie H, Zhang J, Zhou W, Lei M, et al. LOEV-APO-MLP: Latin Hypercube Opposition-Based Elite Variation Artificial Protozoa Optimizer for Multilayer Perceptron Training. Comput Mater Contin. 2025;85(3):5509–5530. https://doi.org/10.32604/cmc.2025.067342
IEEE Style
Z. Ye et al., “LOEV-APO-MLP: Latin Hypercube Opposition-Based Elite Variation Artificial Protozoa Optimizer for Multilayer Perceptron Training,” Comput. Mater. Contin., vol. 85, no. 3, pp. 5509–5530, 2025. https://doi.org/10.32604/cmc.2025.067342



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.