Open Access

ARTICLE


L-Smooth SVM with Distributed Adaptive Proximal Stochastic Gradient Descent with Momentum for Fast Brain Tumor Detection

Chuandong Qin1,2, Yu Cao1,*, Liqun Meng1

1 School of Mathematics and Information Science, North Minzu University, Yinchuan, 750021, China
2 Ningxia Key Laboratory of Intelligent Information and Big Data Processing, Yinchuan, 750021, China

* Corresponding Author: Yu Cao. Email: email

(This article belongs to the Special Issue: Advanced Machine Learning and Optimization for Practical Solutions in Complex Real-world Systems)

Computers, Materials & Continua 2024, 79(2), 1975-1994. https://doi.org/10.32604/cmc.2024.049228

Abstract

Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Automated brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes, and machine learning models have become key players in this automation. Gradient descent methods are the mainstream algorithms for training such models. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L-Smooth Support Vector Machine (SVM) classifier for brain tumor detection. First, the smooth hinge loss is introduced as the loss function of the SVM; it avoids the non-differentiability at the zero point that the traditional hinge loss function encounters during gradient descent optimization. Second, L1 regularization is employed to sparsify features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent with momentum (PGD) and its distributed variant (DPGD) are proposed and applied to the L-Smooth SVM. Distributed computing is crucial in large-scale data analysis: extending algorithms to distributed clusters enables more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark, making full use of a machine's multi-core resources. Thanks to the sparsity induced by L1 regularization on the parameters, it exhibits significantly accelerated convergence; in terms of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants achieve cutting-edge accuracy and efficiency in brain tumor detection. Among pre-trained models, both PGD and DPGD outperform the alternatives, reaching an accuracy of 95.21%.
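The core recipe the abstract describes (a smoothed hinge loss optimized by adaptive proximal SGD with momentum, where the proximal step enforces L1 sparsity) can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact formulation: it assumes Rennie's smooth hinge as the smoothed loss, an RMSProp-style adaptive step size, and soft-thresholding as the L1 proximal operator; the function and parameter names are our own.

```python
import numpy as np

def smooth_hinge_grad(z):
    # Derivative of Rennie's smooth hinge: differentiable everywhere,
    # unlike the kink of the plain hinge max(0, 1 - z).
    return np.where(z <= 0, -1.0, np.where(z < 1, z - 1.0, 0.0))

def soft_threshold(w, t):
    # Proximal operator of t * ||w||_1 (shrinks weights toward zero).
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_sgd_momentum(X, y, lam=1e-3, lr=0.1, beta=0.9, eps=1e-8,
                      epochs=20, batch=32, seed=0):
    """Adaptive proximal SGD with momentum for an L1-regularized
    smooth-hinge SVM (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    m = np.zeros(d)   # momentum buffer (first moment)
    v = np.zeros(d)   # adaptive accumulator (second moment)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for s in range(0, n, batch):
            b = idx[s:s + batch]
            z = y[b] * (X[b] @ w)                       # margins
            g = (X[b] * (y[b] * smooth_hinge_grad(z))[:, None]).mean(axis=0)
            m = beta * m + (1 - beta) * g               # momentum
            v = beta * v + (1 - beta) * g**2            # adaptivity
            step = lr / (np.sqrt(v) + eps)              # per-coordinate lr
            # Gradient step on the smooth part, prox step on the L1 part.
            w = soft_threshold(w - step * m, step * lam)
    return w
```

The distributed (DPGD) version would partition the mini-batch gradient computation across Spark workers and aggregate before the proximal step; the single-machine loop above captures the per-iteration update.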

Keywords


Cite This Article

APA Style
Qin, C., Cao, Y., Meng, L. (2024). L-smooth SVM with distributed adaptive proximal stochastic gradient descent with momentum for fast brain tumor detection. Computers, Materials & Continua, 79(2), 1975-1994. https://doi.org/10.32604/cmc.2024.049228
Vancouver Style
Qin C, Cao Y, Meng L. L-smooth SVM with distributed adaptive proximal stochastic gradient descent with momentum for fast brain tumor detection. Comput Mater Contin. 2024;79(2):1975-1994. https://doi.org/10.32604/cmc.2024.049228
IEEE Style
C. Qin, Y. Cao, and L. Meng, “L-Smooth SVM with Distributed Adaptive Proximal Stochastic Gradient Descent with Momentum for Fast Brain Tumor Detection,” Comput. Mater. Contin., vol. 79, no. 2, pp. 1975-1994, 2024. https://doi.org/10.32604/cmc.2024.049228



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.