Open Access
ARTICLE
LOEV-APO-MLP: Latin Hypercube Opposition-Based Elite Variation Artificial Protozoa Optimizer for Multilayer Perceptron Training
1 School of Computer Science, Hubei University of Technology, Wuhan, 430068, China
2 Hubei Provincial Key Laboratory of Green Intelligent Computing Power Network, Wuhan, 430068, China
3 Hubei Provincial Engineering Technology Research Centre, Wuhan, 430068, China
* Corresponding Author: Haitao Xie. Email:
Computers, Materials & Continua 2025, 85(3), 5509-5530. https://doi.org/10.32604/cmc.2025.067342
Received 30 April 2025; Accepted 28 August 2025; Issue published 23 October 2025
Abstract
The Multilayer Perceptron (MLP) is a fundamental neural network model widely applied across domains, particularly in lightweight image classification, speech recognition, and natural language processing tasks. Despite its widespread success, MLP training often encounters significant challenges, including susceptibility to local optima, slow convergence, and high sensitivity to initial weight configurations. To address these issues, this paper proposes a Latin Hypercube Opposition-based Elite Variation Artificial Protozoa Optimizer (LOEV-APO), which enhances global exploration and local exploitation simultaneously. LOEV-APO introduces a hybrid initialization strategy that combines Latin Hypercube Sampling (LHS) with Opposition-Based Learning (OBL), improving the diversity and coverage of the initial population. In addition, an Elite Protozoa Variation Strategy (EPVS) applies differential mutation to elite candidates, accelerating convergence and strengthening local search around high-quality solutions. Extensive experiments on six classification tasks and four function approximation tasks, covering a wide range of problem complexities, demonstrate superior generalization performance. The results show that LOEV-APO consistently outperforms nine state-of-the-art metaheuristic algorithms and two gradient-based methods in convergence speed, solution accuracy, and robustness. These findings suggest that LOEV-APO is a promising optimization tool for MLP training and a viable alternative to traditional gradient-based methods.
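The two strategies named in the abstract can be illustrated with a minimal sketch. The function names, signatures, and parameter choices below are assumptions for illustration only, not the authors' implementation: LHS stratifies each dimension of the search space, OBL mirrors each candidate across the bounds and keeps the fitter half of the union, and the EPVS-style step applies a standard DE/rand/1 differential mutation to elite candidates.

```python
import numpy as np

def lhs_obl_init(pop_size, dim, lb, ub, fitness, rng=None):
    """Hybrid initialization sketch: Latin Hypercube Sampling (LHS)
    plus Opposition-Based Learning (OBL). Illustrative only."""
    rng = np.random.default_rng(rng)
    # LHS: per dimension, one sample per stratum, strata order permuted
    strata = rng.permuted(np.tile(np.arange(pop_size), (dim, 1)), axis=1).T
    u = (strata + rng.random((pop_size, dim))) / pop_size
    pop = lb + u * (ub - lb)
    # OBL: opposite point of x is lb + ub - x
    opp = lb + ub - pop
    # Keep the best pop_size individuals from the union of both sets
    union = np.vstack([pop, opp])
    scores = np.array([fitness(x) for x in union])
    return union[np.argsort(scores)[:pop_size]]

def elite_variation(elites, F=0.5, rng=None):
    """EPVS-style step (sketch): DE/rand/1 differential mutation
    over elite candidates, v_i = e_r1 + F * (e_r2 - e_r3)."""
    rng = np.random.default_rng(rng)
    n = len(elites)  # requires n >= 3 distinct elites
    idx = np.array([rng.choice(n, size=3, replace=False) for _ in range(n)])
    return elites[idx[:, 0]] + F * (elites[idx[:, 1]] - elites[idx[:, 2]])
```

For MLP training, each candidate vector would encode all weights and biases flattened into one array, with `fitness` returning the network's training loss on the task at hand.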
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

