Advanced Search

  • Title/Keywords

  • Author/Affiliations

  • Journal

  • Article Type

  • Start Year

  • End Year

Search Results (4)
  • Open Access

    ARTICLE

    VGWO: Variant Grey Wolf Optimizer with High Accuracy and Low Time Complexity

    Junqiang Jiang1,2, Zhifang Sun1, Xiong Jiang1, Shengjie Jin1, Yinli Jiang3, Bo Fan1,*

    CMC-Computers, Materials & Continua, Vol.77, No.2, pp. 1617-1644, 2023, DOI:10.32604/cmc.2023.041973

    Abstract The grey wolf optimizer (GWO) is a swarm-based intelligence optimization algorithm that simulates the searching, encircling, and attacking steps of wolves hunting prey. Despite its advantages of a simple principle and few parameters to set, GWO suffers from drawbacks such as low solution accuracy and slow convergence speed. Several recent advanced GWOs have been proposed to overcome these disadvantages; however, they are either difficult to apply to large-scale problems due to high time complexity or prone to premature convergence. To solve the abovementioned issues, a high-accuracy variant grey wolf optimizer…
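    As context for the abstract above, the baseline GWO procedure it builds on (positions updated toward the three best wolves, with the control parameter decreasing over iterations) can be sketched as follows. This is the standard algorithm, not the paper's VGWO variant, and the objective, bounds, and parameter values are illustrative assumptions:

    ```python
    import numpy as np

    def gwo(objective, dim, n_wolves=20, iters=100, lb=-10.0, ub=10.0, seed=0):
        """Minimal standard GWO: each wolf moves toward the three best
        solutions found so far (alpha, beta, delta)."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(lb, ub, (n_wolves, dim))
        for t in range(iters):
            fitness = np.apply_along_axis(objective, 1, X)
            order = np.argsort(fitness)
            alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
            a = 2.0 - 2.0 * t / iters          # linearly decreases from 2 to 0
            for i in range(n_wolves):
                new = np.zeros(dim)
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(dim), rng.random(dim)
                    A, C = 2 * a * r1 - a, 2 * r2
                    D = np.abs(C * leader - X[i])  # encircling distance
                    new += leader - A * D          # step toward this leader
                X[i] = np.clip(new / 3.0, lb, ub)  # average of the three pulls
        fitness = np.apply_along_axis(objective, 1, X)
        return X[np.argmin(fitness)], float(fitness.min())
    ```

    On a simple sphere function the loop converges toward the origin; the variant in the paper modifies this baseline to improve accuracy and reduce time complexity.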

  • Open Access

    ARTICLE

    A Novel Approach to Design Distribution Preserving Framework for Big Data

    Mini Prince1,*, P. M. Joe Prathap2

    Intelligent Automation & Soft Computing, Vol.35, No.3, pp. 2789-2803, 2023, DOI:10.32604/iasc.2023.029533

    Abstract In several fields such as finance, industry, business, and medicine, Big Data (BD), the collection of very large amounts of data, has been utilized extensively. However, processing such a massive amount of data is highly complicated and time-consuming. Thus, to design a Distribution Preserving Framework for BD, a novel methodology has been proposed utilizing Manhattan Distance (MD)-centered Partition Around Medoid (MD-PAM) along with a Conjugate Gradient Artificial Neural Network (CG-ANN), which undergoes various steps to reduce the complications of BD. Firstly, the data are processed in the pre-processing phase by…
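    The MD-PAM step named in the abstract can be illustrated with a simplified alternating medoid update under the Manhattan distance. This sketch is an assumption about the general PAM idea, not the paper's exact framework (the CG-ANN stage and pre-processing are omitted):

    ```python
    import numpy as np

    def manhattan_matrix(X):
        # pairwise Manhattan (L1) distances between all rows of X
        return np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)

    def pam(X, k, iters=20, seed=0):
        """Simplified Partition Around Medoid: assign each point to its
        nearest medoid, then move each medoid to the cluster member that
        minimizes total in-cluster Manhattan distance."""
        rng = np.random.default_rng(seed)
        D = manhattan_matrix(X)
        medoids = rng.choice(len(X), k, replace=False)
        for _ in range(iters):
            labels = D[:, medoids].argmin(axis=1)
            for j in range(k):
                members = np.where(labels == j)[0]
                if len(members):
                    sub = D[np.ix_(members, members)]
                    medoids[j] = members[sub.sum(axis=1).argmin()]
        return labels, medoids
    ```

    Unlike K-means, the cluster centers here are always actual data points, which is what makes PAM robust to outliers.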

  • Open Access

    ARTICLE

    Prototypical Network Based on Manhattan Distance

    Zengchen Yu1, Ke Wang2,*, Shuxuan Xie1, Yuanfeng Zhong1, Zhihan Lv3

    CMES-Computer Modeling in Engineering & Sciences, Vol.131, No.2, pp. 655-675, 2022, DOI:10.32604/cmes.2022.019612

    Abstract Few-shot learning algorithms can be applied effectively in fields where certain categories have only a small amount of data, or a small amount of labeled data, such as medical imaging and terrorist surveillance. Metric learning approaches to few-shot learning classify by measuring the similarity between the classified samples and the unclassified samples. This paper improves the Prototypical Network in metric learning by changing its core metric function to the Manhattan distance. The Convolutional Neural Network of the embedding module is changed, and mechanisms such as average pooling and Dropout are…
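    The metric change this abstract describes, classifying a query by Manhattan distance to per-class prototypes, can be sketched as below. The embeddings are assumed to be precomputed vectors; the paper's CNN embedding module, pooling, and Dropout changes are omitted:

    ```python
    import numpy as np

    def manhattan(a, b):
        # L1 (Manhattan) distance between two embedding vectors
        return float(np.sum(np.abs(a - b)))

    def classify(query, support, labels):
        """Nearest-prototype classification: each class prototype is the
        mean of its support embeddings; the query takes the label of the
        prototype at the smallest Manhattan distance."""
        protos = {c: support[labels == c].mean(axis=0) for c in np.unique(labels)}
        return min(protos, key=lambda c: manhattan(query, protos[c]))
    ```

    Swapping `manhattan` for a Euclidean or cosine distance here is exactly the kind of metric-function change the paper evaluates.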

  • Open Access

    ARTICLE

    Performances of K-Means Clustering Algorithm with Different Distance Metrics

    Taher M. Ghazal1,2, Muhammad Zahid Hussain3, Raed A. Said5, Afrozah Nadeem6, Mohammad Kamrul Hasan1, Munir Ahmad7, Muhammad Adnan Khan3,4,*, Muhammad Tahir Naseem3

    Intelligent Automation & Soft Computing, Vol.30, No.2, pp. 735-742, 2021, DOI:10.32604/iasc.2021.019067

    Abstract Clustering is the process of grouping data based on their similar properties: the categorization of a set of data into similar groups (clusters) whose elements share similarities, such that elements within the same cluster are more similar to each other than to elements of different clusters. Hence, this similarity can be considered as a distance measure. One of the most popular clustering algorithms is K-means, where the distance between every point of the dataset and the cluster centroids is measured to find similar data objects and…
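    The abstract's comparison of K-means under different distance metrics can be illustrated with a minimal Lloyd-style loop that takes a pluggable point-to-centroid distance function. The function names and toy usage are illustrative assumptions; note the centroid update uses the coordinate mean regardless of metric, as in the common K-means formulation:

    ```python
    import numpy as np

    def kmeans(X, k, metric, iters=50, seed=0):
        """Lloyd-style K-means where the assignment step uses a
        caller-supplied distance function."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            # assign each point to its nearest centroid under `metric`
            d = np.array([[metric(x, c) for c in centroids] for x in X])
            labels = d.argmin(axis=1)
            # recompute each non-empty cluster's centroid as the mean
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = X[labels == j].mean(axis=0)
        return labels, centroids

    euclidean = lambda a, b: float(np.sqrt(np.sum((a - b) ** 2)))
    manhattan = lambda a, b: float(np.sum(np.abs(a - b)))
    ```

    Running the same loop with `euclidean` versus `manhattan` is the kind of metric comparison the paper reports.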

Displaying results 1-4 of 4.