Search Results (4)
  • Open Access


    Liver Tumor Segmentation Based on Multi-Scale and Self-Attention Mechanism

    Fufang Li, Manlin Luo*, Ming Hu, Guobin Wang, Yan Chen

    Computer Systems Science and Engineering, Vol.47, No.3, pp. 2835-2850, 2023, DOI:10.32604/csse.2023.039765

    Abstract Liver cancer has the second highest incidence rate among all types of malignant tumors, and currently, its diagnosis heavily depends on doctors’ manual labeling of CT scan images, a process that is time-consuming and susceptible to subjective errors. To address the aforementioned issues, we propose an automatic segmentation model for liver and tumors called Res2Swin Unet, which is based on the Unet architecture. The model combines Attention-Res2 and Swin Transformer modules for liver and tumor segmentation, respectively. Attention-Res2 merges multiple feature map parts with an Attention gate via skip connections, while Swin Transformer captures long-range dependencies and models the input…
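The Attention-Res2 module above merges skip-connection features through an attention gate. As a rough illustration only (not the paper's implementation), a minimal additive attention gate over a skip connection can be sketched in NumPy, with plain matrix multiplies standing in for the learned 1x1 convolutions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate on a skip connection (illustrative sketch).

    x      : (n, d) encoder skip features
    g      : (n, d) decoder gating signal
    Wx, Wg : (d, k) linear maps standing in for 1x1 convolutions
    psi    : (k, 1) projection to a scalar attention coefficient
    """
    a = np.maximum(x @ Wx + g @ Wg, 0.0)   # additive attention + ReLU
    alpha = sigmoid(a @ psi)               # (n, 1) coefficients in (0, 1)
    return alpha * x                       # skip features reweighted per position

# Hypothetical shapes and random weights, just to exercise the function
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
g = rng.normal(size=(5, 8))
Wx, Wg, psi = rng.normal(size=(8, 4)), rng.normal(size=(8, 4)), rng.normal(size=(4, 1))
gated = attention_gate(x, g, Wx, Wg, psi)
```

Because the coefficients lie in (0, 1), the gate can only attenuate skip features, letting the decoder suppress irrelevant regions before fusion.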

  • Open Access


    3D Object Detection with Attention: Shell-Based Modeling

    Xiaorui Zhang1,2,3,4,*, Ziquan Zhao1, Wei Sun4,5, Qi Cui6

    Computer Systems Science and Engineering, Vol.46, No.1, pp. 537-550, 2023, DOI:10.32604/csse.2023.034230

    Abstract LIDAR point cloud-based 3D object detection aims to sense the surrounding environment by anchoring objects with the Bounding Box (BBox). However, in the three-dimensional space of autonomous driving scenes, previous object detection methods, because they pre-process the original LIDAR point cloud into voxels or pillars, lose the coordinate information of the original point cloud, slow down detection, and produce inaccurate bounding box positioning. To address these issues, this study proposes a new two-stage network structure that extracts point cloud features directly with PointNet++, which effectively preserves the original point cloud coordinate information. To improve the detection…
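The abstract's point about voxelization discarding coordinate information can be shown with a toy example (hypothetical values, not from the paper): snapping points to voxel centres quantizes their positions, so nearby points become indistinguishable.

```python
import numpy as np

def voxelize(points, voxel_size):
    # Snap each LIDAR point to the centre of its voxel cell;
    # the exact coordinates inside a cell are lost.
    idx = np.floor(points / voxel_size).astype(int)
    return (idx + 0.5) * voxel_size

points = np.array([[0.12, 0.47, 0.33],
                   [0.11, 0.49, 0.31]])
# With 0.2 m voxels both points collapse onto the same centre.
centres = voxelize(points, 0.2)
```

Point-based backbones such as PointNet++ operate on the raw coordinates instead, which is the motivation the abstract gives for the proposed two-stage design.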

  • Open Access


    Chinese Q&A Community Medical Entity Recognition with Character-Level Features and Self-Attention Mechanism

    Pu Han1,2, Mingtao Zhang1, Jin Shi3, Jinming Yang4, Xiaoyan Li5,*

    Intelligent Automation & Soft Computing, Vol.29, No.1, pp. 55-72, 2021, DOI:10.32604/iasc.2021.017021

    Abstract With the rapid development of the Internet, the medical Q&A community has become an important channel for people to obtain and share medical and health knowledge. Online medical entity recognition (OMER), as the foundation of medical and health information extraction, has attracted extensive attention from researchers in recent years. To further advance Chinese OMER, the LSTM-Att-Med model is proposed in this paper to capture more external semantic features and important information. First, Word2vec is used to generate character-level vectors with semantic features on the basis of an unlabeled corpus in the medical domain and open…
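The character-level representation mentioned above matters because Chinese has no whitespace word boundaries. A hypothetical sketch of character-level tokenization and indexing (the paper additionally trains Word2vec character vectors on a medical corpus, which is not reproduced here):

```python
def char_tokenize(text):
    # Character-level tokenization sidesteps Chinese word-segmentation
    # errors, which are common for rare medical terms.
    return list(text)

# Toy two-document "corpus" (hypothetical examples, not the paper's data)
corpus = ["肝脏肿瘤", "肿瘤分割"]

# Build a character vocabulary and map each document to index sequences,
# the usual input to an embedding layer such as Word2vec vectors.
vocab = {ch: i for i, ch in enumerate(sorted({c for s in corpus for c in s}))}
ids = [[vocab[c] for c in char_tokenize(s)] for s in corpus]
```

Each index would then look up a pre-trained character vector, so shared characters (here 肿 and 瘤) receive the same representation in both documents.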

  • Open Access


    Keyphrase Generation Based on Self-Attention Mechanism

    Kehua Yang1,*, Yaodong Wang1, Wei Zhang1, Jiqing Yao2, Yuquan Le1

    CMC-Computers, Materials & Continua, Vol.61, No.2, pp. 569-581, 2019, DOI:10.32604/cmc.2019.05952

    Abstract Keyphrases provide summarized and valuable information, which can help us not only understand text semantics but also organize and retrieve text content effectively. The task of automatically generating them has received considerable attention in recent decades. Previous studies offer many workable solutions for obtaining keyphrases. One method is to divide the content to be summarized into multiple blocks of text, then rank and select the most important ones. The disadvantage of this method is that it cannot identify keyphrases that are not included in the text, let alone capture the real semantic meaning…
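As background for the self-attention mechanism named in the title, a minimal scaled dot-product self-attention layer can be sketched in NumPy (an illustrative sketch under assumed shapes, not the paper's model):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Every position attends to every other position, so the encoding of
    # one token can draw on the whole input sequence.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / np.sqrt(K.shape[-1])  # scaled dot-product
    A = softmax(scores)                        # each row sums to 1
    return A @ V, A

# Hypothetical sequence of 6 tokens with 16-dim embeddings
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
```

This global weighting is what lets generation models of this kind relate a candidate keyphrase to semantics spread across the whole document rather than a single text block.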
