
Search Results (2)
  • Open Access

    ARTICLE

    Image Inpainting Technique Incorporating Edge Prior and Attention Mechanism

    Jinxian Bai, Yao Fan*, Zhiwei Zhao, Lizhi Zheng

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 999-1025, 2024, DOI:10.32604/cmc.2023.044612

    Abstract Recently, deep learning-based image inpainting methods have made great strides in reconstructing damaged regions. However, these methods often struggle to produce satisfactory results when dealing with images containing large missing regions, leading to structural distortions and blurred textures. To address these problems, we combine the advantages of transformers and convolutions to propose an image inpainting method that incorporates edge priors and attention mechanisms. The proposed method aims to improve the inpainting of large holes by enhancing the accuracy of structure restoration and the ability to recover texture details. This method divides the inpainting task…
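The abstract above mentions an edge prior guiding structure restoration. As a rough, hypothetical illustration of what an "edge prior" can mean in practice (this is not the authors' code; a classical Sobel filter stands in for their learned edge branch), a normalized edge map can be computed from a grayscale image like so:

```python
import numpy as np

def sobel_edge_prior(img):
    """Compute a normalized edge-magnitude map from a grayscale image (H, W).

    Illustrative stand-in for an edge prior; real inpainting pipelines
    typically use a learned edge detector rather than a fixed Sobel filter.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                                         # vertical gradient kernel
    H, W = img.shape
    padded = np.pad(img, 1, mode="edge")
    gx = np.zeros((H, W), dtype=float)
    gy = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)  # gradient magnitude per pixel
    return mag / mag.max() if mag.max() > 0 else mag
```

Such a map would be fed alongside the masked image so the network can hallucinate plausible structure inside the hole before filling in texture.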

  • Open Access

    ARTICLE

    TC-Fuse: A Transformers Fusing CNNs Network for Medical Image Segmentation

    Peng Geng, Ji Lu, Ying Zhang*, Simin Ma, Zhanzhong Tang, Jianhua Liu

    CMES-Computer Modeling in Engineering & Sciences, Vol.137, No.2, pp. 2001-2023, 2023, DOI:10.32604/cmes.2023.027127

    Abstract In medical image segmentation tasks, convolutional neural networks (CNNs) struggle to capture long-range dependencies, whereas transformers can model them effectively. However, transformers have a flexible structure and seldom assume structural bias in the input data, so it is difficult for them to learn positional encodings for medical images when few images are available for training. To solve these problems, a dual-branch structure is proposed. In one branch, a Mix-Feed-Forward Network (Mix-FFN) and axial attention are adopted to capture long-range dependencies while preserving the translation invariance of the model. Mix-FFN, whose depth-wise convolutions can provide position information, is…
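The abstract describes a Mix-FFN whose depth-wise convolutions inject positional information into token features. As a hedged sketch of that idea (the weight shapes and the expand-conv-project layout are assumptions in the style of SegFormer-like designs, not the paper's exact configuration):

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def mix_ffn(tokens, H, W, w1, dw_kernel, w2):
    """Sketch of a Mix-FFN block: linear expansion -> depth-wise 3x3 conv
    over the (H, W) token grid -> GELU -> linear projection back.

    Shapes (illustrative assumptions):
      tokens: (H*W, C)   flattened feature map
      w1:     (C, E)     channel expansion
      dw_kernel: (E, 3, 3) one 3x3 filter per expanded channel (depth-wise)
      w2:     (E, C)     projection back to the token dimension
    """
    x = tokens @ w1                  # (H*W, E)
    x = x.reshape(H, W, -1)
    E = x.shape[-1]
    padded = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 3, j:j + 3, :]          # (3, 3, E) neighborhood
            out[i, j] = np.einsum("hwc,chw->c", patch, dw_kernel)
    x = gelu(out.reshape(H * W, E))
    return x @ w2                    # (H*W, C)
```

Because the depth-wise convolution sees each token's spatial neighborhood, the block encodes position implicitly, which is why such designs can drop explicit positional encodings.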
