Search Results (3)
  • Open Access

    ARTICLE

    BLFM-Net: An Efficient Regional Feature Matching Method for Bronchoscopic Surgery Based on Deep Learning Object Detection

    He Su, Jianwei Gao, Kang Kong*

    CMC-Computers, Materials & Continua, Vol.83, No.3, pp. 4193-4213, 2025, DOI:10.32604/cmc.2025.063355 - 19 May 2025

    Abstract Accurate and robust navigation in complex surgical environments is crucial for bronchoscopic surgeries. This study proposes a bronchoscopic lumen feature matching network (BLFM-Net) based on deep learning to address the challenges of image noise, anatomical complexity, and stringent real-time requirements. BLFM-Net enhances bronchoscopic image processing by integrating several functional modules. The FFA-Net preprocessing module mitigates image fogging and improves visual clarity for subsequent processing. The feature extraction module derives multi-dimensional features, such as centroids, area, and shape descriptors, from the dehazed images. The Faster R-CNN object detection module detects bronchial regions of interest and…
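The regional descriptors named in the abstract (centroid, area, shape) can be illustrated with a minimal sketch. This is not the paper's code; the extent-based shape descriptor and the function name are assumptions chosen for illustration:

```python
# Illustrative sketch (not BLFM-Net's implementation): given a binary mask
# for one detected bronchial region, compute simple regional descriptors
# of the kind the abstract mentions -- centroid, area, and a shape measure.

def region_features(mask):
    """mask: list of rows of 0/1 values for one detected region."""
    points = [(r, c) for r, row in enumerate(mask)
                     for c, v in enumerate(row) if v]
    area = len(points)
    if area == 0:
        return None
    cy = sum(r for r, _ in points) / area  # centroid row
    cx = sum(c for _, c in points) / area  # centroid column
    # Crude shape descriptor (an assumption for illustration):
    # extent = region area / bounding-box area, in (0, 1].
    rs = [r for r, _ in points]
    cs = [c for _, c in points]
    bbox = (max(rs) - min(rs) + 1) * (max(cs) - min(cs) + 1)
    return {"centroid": (cy, cx), "area": area, "extent": area / bbox}

# A 2x2 blob inside a 3x3 mask: area 4, centroid (0.5, 1.5), extent 1.0.
features = region_features([[0, 1, 1],
                            [0, 1, 1],
                            [0, 0, 0]])
print(features)
```

In a real pipeline these descriptors would be computed per detected region and fed to the matching stage; here the point is only what "regional features" of a mask look like numerically.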

  • Open Access

    ARTICLE

    MVCE-Net: Multi-View Region Feature and Caption Enhancement Co-Attention Network for Visual Question Answering

    Feng Yan, Wushouer Silamu, Yanbing Li*

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 65-80, 2023, DOI:10.32604/cmc.2023.038177 - 08 June 2023

    Abstract Visual question answering (VQA) requires a deep understanding of images and their corresponding textual questions in order to answer questions about images accurately. However, existing models tend to ignore the implicit knowledge in images and focus only on their visual information, which limits how deeply the image content is understood. Images contain more than just visual objects: some include textual information about the scene, and slightly more complex images contain relationships between individual visual objects. Firstly, this paper proposes a model using image description for feature enhancement. This model encodes…
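The attention over region features that the title refers to can be illustrated with a minimal, framework-free sketch of single-query dot-product attention; the function name and toy vectors are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch (not MVCE-Net's implementation): dot-product
# attention of one query (e.g. a question encoding) over a set of
# region features, producing one attended feature vector.
import math

def attend(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(len(query))
              for key in keys]
    # Numerically stable softmax over the scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of the value (region feature) vectors.
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# The query aligns with the first region, so its value dominates the output.
out = attend([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
print(out)
```

Co-attention, as in the title, would apply this in both directions (question attending to regions and regions attending to question words); the sketch shows only one direction.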

  • Open Access

    ARTICLE

    Fine-Grained Features for Image Captioning

    Mengyue Shao, Jie Feng*, Jie Wu, Haixiang Zhang, Yayu Zheng

    CMC-Computers, Materials & Continua, Vol.75, No.3, pp. 4697-4712, 2023, DOI:10.32604/cmc.2023.036564 - 29 April 2023

    Abstract Image captioning involves two different major modalities (image and sentence) and converts a given image into language that adheres to its visual semantics. Almost all methods first extract image features to reduce the difficulty of visual semantic embedding and then use a caption model to generate fluent sentences. A Convolutional Neural Network (CNN) is often used to extract image features in image captioning, and the use of object detection networks to extract region features has achieved great success. However, the region features retrieved this way are object-level and do not pay attention to fine-grained…
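The contrast the abstract draws between object-level region features and finer-grained features can be illustrated with a toy grid-pooling sketch, which splits a feature map into uniform cells instead of whole-object boxes. The function name and pooling scheme are illustrative assumptions, not the paper's method:

```python
# Illustrative sketch (not the paper's method): average-pool a 2-D
# feature map into a grid of cells, giving finer spatial granularity
# than a single pooled vector per detected object.

def grid_pool(feature_map, grid):
    """Average-pool an H x W map into grid x grid cell features."""
    h, w = len(feature_map), len(feature_map[0])
    cells = []
    for gr in range(grid):
        for gc in range(grid):
            r0, r1 = gr * h // grid, (gr + 1) * h // grid
            c0, c1 = gc * w // grid, (gc + 1) * w // grid
            vals = [feature_map[r][c]
                    for r in range(r0, r1) for c in range(c0, c1)]
            cells.append(sum(vals) / len(vals))
    return cells

# A 4x4 map with four constant quadrants pools to one value per cell.
fm = [[1, 1, 2, 2],
      [1, 1, 2, 2],
      [3, 3, 4, 4],
      [3, 3, 4, 4]]
print(grid_pool(fm, 2))
```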
