
Search Results (2)
  • Open Access

    ARTICLE

    Classifying Hematoxylin and Eosin Images Using a Super-Resolution Segmentor and a Deep Ensemble Classifier

    P. Sabitha*, G. Meeragandhi

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 1983-2000, 2023, DOI:10.32604/iasc.2023.034402

    Abstract Developing an automatic and credible diagnostic system to analyze the type, stage, and level of liver cancer from Hematoxylin and Eosin (H&E) images is a very challenging and time-consuming endeavor, even for experienced pathologists, due to non-uniform illumination and artifacts. Although several Machine Learning (ML) and Deep Learning (DL) approaches have been employed to improve the performance of automatic liver cancer diagnostic systems, the classification accuracy of these systems still needs significant improvement to satisfy the real-time requirements of diagnostic settings. In this work, we present a new Ensemble Classifier (hereafter called ECNet) to classify the H&E stained…

  • Open Access

    ARTICLE

    Visual Saliency Prediction Using Attention-based Cross-modal Integration Network in RGB-D Images

    Xinyue Zhang1, Ting Jin1,*, Mingjie Han1, Jingsheng Lei2, Zhichao Cao3

    Intelligent Automation & Soft Computing, Vol.30, No.2, pp. 439-452, 2021, DOI:10.32604/iasc.2021.018643

    Abstract Saliency prediction has recently attracted considerable attention owing to the rapid development of deep neural networks for computer vision tasks. However, some dilemmas still need to be addressed. In this paper, we design a visual saliency prediction model using attention-based cross-modal integration strategies in RGB-D images. Unlike other symmetric feature extraction networks, we exploit asymmetric networks to effectively extract depth features as complementary information to the RGB information. We then propose attention modules that integrate cross-modal feature information and emphasize the feature representation of salient regions while neglecting the surrounding unimportant pixels, so…
