Search Results (4)
  • Open Access

    ARTICLE

    Liver Tumor Segmentation Based on Multi-Scale and Self-Attention Mechanism

    Fufang Li, Manlin Luo*, Ming Hu, Guobin Wang, Yan Chen

    Computer Systems Science and Engineering, Vol.47, No.3, pp. 2835-2850, 2023, DOI:10.32604/csse.2023.039765

Abstract Liver cancer has the second highest incidence rate among all types of malignant tumors, and currently, its diagnosis heavily depends on doctors’ manual labeling of CT scan images, a process that is time-consuming and susceptible to subjective errors. To address the aforementioned issues, we propose an automatic segmentation model for liver and tumors called Res2Swin Unet, which is based on the Unet architecture. The model combines Attention-Res2 and Swin Transformer modules for liver and tumor segmentation, respectively. Attention-Res2 merges multiple feature map parts with an Attention gate via skip connections, while Swin Transformer captures long-range dependencies and models the input…
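The attention-gated skip connection this abstract describes can be sketched in a few lines. This is a minimal scalar toy, not the Res2Swin Unet implementation: the weights `wx`, `wg`, and `psi` are hypothetical stand-ins for the learned 1×1 convolutions of a real attention gate.

```python
import math

def attention_gate(x, g, wx, wg, psi):
    """Additive attention gate over a skip connection (toy sketch).

    x : skip-connection features from the encoder (list of floats)
    g : gating signal from the decoder, same length as x
    wx, wg, psi : hypothetical scalar weights standing in for
        learned 1x1 convolutions
    Returns x re-weighted by attention coefficients in (0, 1).
    """
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    out = []
    for xi, gi in zip(x, g):
        q = max(0.0, wx * xi + wg * gi)  # ReLU(Wx*x + Wg*g)
        alpha = sigmoid(psi * q)         # attention coefficient
        out.append(xi * alpha)           # suppress irrelevant skip features
    return out
```

The decoder's gating signal decides how much of each encoder feature passes through the skip connection, which is how the gate focuses the merge on tumor-relevant regions.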

  • Open Access

    ARTICLE

    DT-Net: Joint Dual-Input Transformer and CNN for Retinal Vessel Segmentation

Wenran Jia, Simin Ma, Peng Geng, Yan Sun*

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3393-3411, 2023, DOI:10.32604/cmc.2023.040091

Abstract Retinal vessel segmentation in fundus images plays an essential role in the screening, diagnosis, and treatment of many diseases. Acquired fundus images generally suffer from uneven illumination, high noise, and complex structure, which makes vessel segmentation very challenging. Previous methods of retinal vascular segmentation mainly use convolutional neural networks based on the U-Net model, and they have many limitations and shortcomings, such as the loss of microvascular details at the ends of vessels. We address the limitations of convolution by introducing the transformer into retinal vessel segmentation. Therefore, we propose a hybrid method for retinal vessel…
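The "limitation of convolution" this abstract addresses is its local receptive field; a transformer's scaled dot-product attention lets every token attend to every other one. A minimal sketch with toy scalar embeddings (this illustrates the attention mechanism only, not the DT-Net architecture):

```python
import math

def self_attention(q, k, v):
    """Scaled dot-product self-attention over a 1-D token sequence.

    q, k, v : per-token query/key/value scalars (lists of floats);
        real transformers use learned vector projections instead.
    Returns one attended output per token, each a softmax-weighted
    mix of ALL values -- a global receptive field in one step.
    """
    d = 1.0  # embedding dimension of the toy scalar tokens
    out = []
    for qi in q:
        scores = [qi * kj / math.sqrt(d) for kj in k]  # similarity to every token
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]       # numerically stable softmax
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append(sum(w * vj for w, vj in zip(weights, v)))
    return out
```

Each output position mixes information from the whole sequence, whereas a convolution of kernel size k only sees k neighbors; this is the long-range dependency modeling that motivates hybrid CNN-transformer designs.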

  • Open Access

    ARTICLE

    MIA-UNet: Multi-Scale Iterative Aggregation U-Network for Retinal Vessel Segmentation

    Linfang Yu, Zhen Qin*, Yi Ding, Zhiguang Qin

    CMES-Computer Modeling in Engineering & Sciences, Vol.129, No.2, pp. 805-828, 2021, DOI:10.32604/cmes.2021.017332

Abstract As an important part of the new generation of information technology, the Internet of Things (IoT) has attracted widespread attention and is regarded as an enabling technology for the next generation of health care systems. Fundus photography equipment is connected to the cloud platform through the IoT, enabling real-time uploading of fundus images and rapid delivery of diagnostic suggestions by artificial intelligence. At the same time, important security and privacy issues have emerged. The data uploaded to the cloud platform include patients' personal attributes, health status, and medical application data. Once leaked, abused…

  • Open Access

    ARTICLE

    Stereo Matching Method Based on Space-Aware Network Model

Jilong Bian*, Jinfeng Li

    CMES-Computer Modeling in Engineering & Sciences, Vol.127, No.1, pp. 175-189, 2021, DOI:10.32604/cmes.2021.014635

Abstract A stereo matching method based on a space-aware network is proposed, which divides the network into three sections: a basic layer, a scale layer, and a decision layer. This division makes it easier to integrate residual networks and dense networks into the space-aware network model. A vertical splitting method for computing the matching cost with the space-aware network is proposed to overcome the limitation of GPU RAM. Moreover, a hybrid loss is introduced to boost the performance of the proposed deep network. In the proposed stereo matching method, the space-aware network is used to calculate the matching cost and then cross-based cost aggregation…
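The vertical-splitting idea in this abstract amounts to processing an image in vertical strips so each strip's matching-cost volume fits in GPU memory, then stitching the per-strip results back together. A hedged sketch (the strip widths and overlap handling are assumptions, not the paper's exact scheme):

```python
def split_vertically(image, n_strips, overlap=0):
    """Split a 2-D image (list of rows) into vertical strips.

    Each strip is widened by `overlap` columns so costs computed
    near strip borders can still see their neighbours.
    """
    width = len(image[0])
    base = width // n_strips
    strips = []
    for i in range(n_strips):
        lo = i * base
        hi = width if i == n_strips - 1 else (i + 1) * base
        a, b = max(0, lo - overlap), min(width, hi + overlap)
        strips.append([row[a:b] for row in image])
    return strips

def stitch(strips, width, n_strips, overlap=0):
    """Reassemble per-strip results, dropping the overlap margins."""
    base = width // n_strips
    rows = len(strips[0])
    out = [[] for _ in range(rows)]
    for i, strip in enumerate(strips):
        lo = i * base
        start = lo - max(0, lo - overlap)  # columns to skip on the left
        take = (width - lo) if i == n_strips - 1 else base
        for r in range(rows):
            out[r].extend(strip[r][start:start + take])
    return out
```

Splitting then stitching is lossless for any strip count and overlap, so peak memory scales with strip width rather than full image width.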
