
Open Access

ARTICLE

L1-Smooth SVM with Distributed Adaptive Proximal Stochastic Gradient Descent with Momentum for Fast Brain Tumor Detection

Chuandong Qin1,2, Yu Cao1,*, Liqun Meng1
1 School of Mathematics and Information Science, North Minzu University, Yinchuan, 750021, China
2 Ningxia Key Laboratory of Intelligent Information and Big Data Processing, Yinchuan, 750021, China
* Corresponding Author: Yu Cao. Email: email
(This article belongs to the Special Issue: Advanced Machine Learning and Optimization for Practical Solutions in Complex Real-world Systems)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2024.049228

Received 31 December 2023; Accepted 14 March 2024; Published online 18 April 2024

Abstract

Brain tumors come in many types, each with distinct characteristics and treatment approaches, which makes manual detection time-consuming and potentially ambiguous. Automated brain tumor detection is therefore a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes, and machine learning models have become key players in this automation. Gradient descent methods are the mainstream algorithms for training such models. In this paper, we propose a novel distributed proximal stochastic gradient descent approach for solving the L1-Smooth Support Vector Machine (SVM) classifier for brain tumor detection. First, the smooth hinge loss is adopted as the loss function of the SVM; unlike the traditional hinge loss, it is differentiable at the zero point, avoiding the non-differentiability issue that arises during gradient descent optimization. Second, L1 regularization is employed to sparsify the features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent with momentum (PGD) and its distributed variant (DPGD) are proposed and applied to the L1-Smooth SVM. Distributed computing is crucial in large-scale data analysis: extending algorithms to distributed clusters enables more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark to make full use of a computer's multi-core resources, and the parameter sparsity induced by L1 regularization further accelerates its convergence; in terms of loss reduction, DPGD converges faster than PGD. Experimental results show that adaptive PGD with momentum and its variants achieve state-of-the-art accuracy and efficiency in brain tumor detection. Built on pre-trained models, both PGD and DPGD outperform the other models, reaching an accuracy of 95.21%.

Keywords

Support vector machine; proximal stochastic gradient descent; brain tumor detection; distributed computing
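The core idea described in the abstract — a smooth hinge loss paired with an L1 proximal (soft-thresholding) step and momentum — can be sketched in a few lines of NumPy. This is a minimal single-machine illustration, not the paper's implementation: it uses Rennie's smooth hinge as one common differentiable hinge variant, plain heavy-ball momentum without the paper's adaptive step size, and all function names and hyperparameter values below are illustrative assumptions. The distributed DPGD version would additionally shard minibatch gradient computation across Spark workers.

```python
import numpy as np

def smooth_hinge(z):
    """Rennie's smooth hinge: differentiable everywhere, unlike max(0, 1 - z)."""
    return np.where(z >= 1, 0.0, np.where(z <= 0, 0.5 - z, 0.5 * (1 - z) ** 2))

def smooth_hinge_grad(z):
    """Derivative of the smooth hinge with respect to the margin z = y * (x . w)."""
    return np.where(z >= 1, 0.0, np.where(z <= 0, -1.0, -(1 - z)))

def soft_threshold(w, t):
    """Proximal operator of t * ||w||_1 (componentwise soft-thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_sgd_momentum(X, y, lam=0.01, lr=0.02, beta=0.9, epochs=100, batch=16, seed=0):
    """Proximal SGD with momentum for min_w mean(smooth_hinge(y * Xw)) + lam * ||w||_1.

    Each step: momentum update on the smooth-loss gradient, then a proximal
    (soft-thresholding) step that handles the non-smooth L1 term and induces
    sparsity in w.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    v = np.zeros(d)  # momentum buffer
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch):
            b = idx[start:start + batch]
            z = y[b] * (X[b] @ w)  # per-sample margins
            # minibatch gradient of the smooth hinge term
            g = (X[b] * (y[b] * smooth_hinge_grad(z))[:, None]).mean(axis=0)
            v = beta * v + g                           # heavy-ball momentum
            w = soft_threshold(w - lr * v, lr * lam)   # proximal L1 step
    return w
```

On linearly separable synthetic data, the returned weight vector classifies the training set accurately while the soft-thresholding step drives irrelevant coordinates toward zero — the sparsity effect the abstract credits for faster convergence.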