
Search Results (4)
  • Open Access

    ARTICLE

    A Lightweight Multimodal Deep Fusion Network for Face Anti-Spoofing with Cross-Axial Attention and Deep Reinforcement Learning Technique

    Diyar Wirya Omar Ameenulhakeem*, Osman Nuri Uçan

    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 5671-5702, 2025, DOI:10.32604/cmc.2025.070422 - 23 October 2025

    Abstract Face antispoofing has received a lot of attention because it plays a role in strengthening the security of face recognition systems. Face recognition is commonly used for authentication in surveillance applications. However, attackers try to compromise these systems by using spoofing techniques, such as presenting photos or videos of users, to gain access to services or information. Many existing face antispoofing methods face difficulties when dealing with new scenarios, especially when there are variations in background, lighting, and other environmental factors. Recent advancements in deep learning with multi-modality methods have shown their effectiveness in…

  • Open Access

    ARTICLE

    A Tabletop Nano-CT Image Noise Reduction Network Based on 3-Dimensional Axial Attention Mechanism

    Huijuan Fu, Linlin Zhu, Chunhui Wang, Xiaoqi Xi, Yu Han, Lei Li, Yanmin Sun, Bin Yan*

    CMC-Computers, Materials & Continua, Vol.80, No.1, pp. 1711-1725, 2024, DOI:10.32604/cmc.2024.049623 - 18 July 2024

    Abstract Nano-computed tomography (Nano-CT) is an emerging, high-resolution imaging technique. However, due to its low-light properties, tabletop Nano-CT must be scanned under long-exposure conditions, which makes the scanning process time-consuming. For 3D reconstruction data, this paper proposed a lightweight 3D noise reduction method for desktop-level Nano-CT called AAD-ResNet (Axial Attention DeNoise ResNet). The network is framed by the U-net structure. The encoder and decoder are incorporated with the proposed 3D axial attention mechanism and residual dense block. Each layer of the residual dense block can directly access the features of the previous layer, which…

  • Open Access

    ARTICLE

    Image Inpainting Technique Incorporating Edge Prior and Attention Mechanism

    Jinxian Bai, Yao Fan*, Zhiwei Zhao, Lizhi Zheng

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 999-1025, 2024, DOI:10.32604/cmc.2023.044612 - 30 January 2024

    Abstract Recently, deep learning-based image inpainting methods have made great strides in reconstructing damaged regions. However, these methods often struggle to produce satisfactory results when dealing with missing images with large holes, leading to distortions in the structure and blurring of textures. To address these problems, we combine the advantages of transformers and convolutions to propose an image inpainting method that incorporates edge priors and attention mechanisms. The proposed method aims to improve the results of inpainting large holes in images by enhancing the accuracy of structure restoration and the ability to recover texture details. This…

  • Open Access

    ARTICLE

    TC-Fuse: A Transformers Fusing CNNs Network for Medical Image Segmentation

    Peng Geng1, Ji Lu1, Ying Zhang2,*, Simin Ma1, Zhanzhong Tang2, Jianhua Liu3

    CMES-Computer Modeling in Engineering & Sciences, Vol.137, No.2, pp. 2001-2023, 2023, DOI:10.32604/cmes.2023.027127 - 26 June 2023

    Abstract In medical image segmentation tasks, convolutional neural networks (CNNs) struggle to capture long-range dependencies, whereas transformers can model long-range dependencies effectively. However, transformers have a flexible structure and seldom assume structural bias in the input data, so it is difficult for transformers to learn positional encoding of medical images when trained on fewer images. To solve these problems, a dual-branch structure is proposed. In one branch, a Mix-Feed-Forward Network (Mix-FFN) and axial attention are adopted to capture long-range dependencies and keep the translation invariance of the model. Mix-FFN, whose depth-wise convolutions…
