Advanced Search

Search Results (51)
  • Open Access

    ARTICLE

    Semantic Segmentation of Lumbar Vertebrae Using Meijering U-Net (MU-Net) on Spine Magnetic Resonance Images

    Lakshmi S V V1, Shiloah Elizabeth Darmanayagam1,*, Sunil Retmin Raj Cyril2

    CMES-Computer Modeling in Engineering & Sciences, Vol.142, No.1, pp. 733-757, 2025, DOI:10.32604/cmes.2024.056424 - 17 December 2024

    Abstract Lower back pain is one of the most common medical problems in the world and is experienced by a large proportion of people worldwide. Due to its ability to produce a detailed view of the soft tissues, including the spinal cord, nerves, intervertebral discs, and vertebrae, Magnetic Resonance Imaging is thought to be the most effective method for imaging the spine. The semantic segmentation of vertebrae plays a major role in the diagnostic process of lumbar diseases. It is difficult to semantically partition the vertebrae in Magnetic Resonance Images from the surrounding variety of…
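
    The title above pairs a U-Net with the Meijering ridge filter. Below is a minimal, hypothetical example of the filter itself as provided by scikit-image; how MU-Net actually wires the filter into the network is not shown in this preview, and the input image and sigma values are placeholders.

    import numpy as np
    from skimage.filters import meijering

    mri_slice = np.random.rand(256, 256)                   # stand-in for a sagittal lumbar MRI slice
    ridge_map = meijering(mri_slice, sigmas=range(1, 6),   # multi-scale ridge/tubular-structure response
                          black_ridges=False)
    print(ridge_map.shape, float(ridge_map.min()), float(ridge_map.max()))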

  • Open Access

    ARTICLE

    A Real-Time Semantic Segmentation Method Based on Transformer for Autonomous Driving

    Weiyu Hao1, Jingyi Wang2, Huimin Lu3,*

    CMC-Computers, Materials & Continua, Vol.81, No.3, pp. 4419-4433, 2024, DOI:10.32604/cmc.2024.055478 - 19 December 2024

    Abstract While traditional Convolutional Neural Network (CNN)-based semantic segmentation methods have proven effective, they often encounter significant computational challenges due to the requirement for dense pixel-level predictions, which complicates real-time implementation. To address this, we introduce an advanced real-time semantic segmentation strategy specifically designed for autonomous driving, utilizing the capabilities of Visual Transformers. By leveraging the self-attention mechanism inherent in Visual Transformers, our method enhances global contextual awareness, refining the representation of each pixel in relation to the overall scene. This enhancement is critical for quickly and accurately interpreting the complex elements within driving scenarios—a fundamental…
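
    A minimal PyTorch sketch of the pixel-level self-attention idea this abstract leans on: every pixel token attends to every other, so each representation is refined with global context. Layer sizes and names are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class PixelSelfAttention(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            self.q = nn.Linear(channels, channels)
            self.k = nn.Linear(channels, channels)
            self.v = nn.Linear(channels, channels)
            self.scale = channels ** -0.5

        def forward(self, feats: torch.Tensor) -> torch.Tensor:
            # feats: (B, C, H, W) backbone feature map
            b, c, h, w = feats.shape
            tokens = feats.flatten(2).transpose(1, 2)            # (B, H*W, C): one token per pixel
            attn = (self.q(tokens) @ self.k(tokens).transpose(1, 2)) * self.scale
            attn = attn.softmax(dim=-1)                          # each pixel attends to every other pixel
            out = attn @ self.v(tokens)                          # globally refined pixel representations
            return out.transpose(1, 2).reshape(b, c, h, w)

    # Example: refine an 8-channel 32x32 feature map
    refined = PixelSelfAttention(8)(torch.randn(1, 8, 32, 32))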

  • Open Access

    ARTICLE

    PCB CT Image Element Segmentation Model Optimizing the Semantic Perception of Connectivity Relationship

    Chen Chen, Kai Qiao, Jie Yang, Jian Chen, Bin Yan*

    CMC-Computers, Materials & Continua, Vol.81, No.2, pp. 2629-2642, 2024, DOI:10.32604/cmc.2024.056038 - 18 November 2024

    Abstract Computed Tomography (CT) is a commonly used technology in Printed Circuit Boards (PCB) non-destructive testing, and element segmentation of CT images is a key subsequent step. With the development of deep learning, researchers began to exploit the “pre-training and fine-tuning” training process for multi-element segmentation, reducing the time spent on manual annotation. However, the existing element segmentation model only focuses on the overall accuracy at the pixel level, ignoring whether the element connectivity relationship can be correctly identified. To this end, this paper proposes a PCB CT image element segmentation model optimizing the semantic perception…
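
    The connectivity point above can be made concrete with a toy check: two masks can have nearly identical pixel accuracy while one breaks a trace into separate pieces. The connected-component comparison below is a hypothetical illustration, not the metric used in the paper.

    import numpy as np
    from scipy import ndimage

    def component_count(mask: np.ndarray) -> int:
        """Number of 8-connected foreground components in a binary mask."""
        _, num = ndimage.label(mask, structure=np.ones((3, 3)))
        return num

    gt = np.zeros((8, 8), dtype=int)
    gt[4, 1:7] = 1                       # one continuous trace in the ground truth

    pred = gt.copy()
    pred[4, 3] = 0                       # a single mispredicted pixel splits the trace

    print((pred == gt).mean())           # pixel accuracy is still ~0.98
    print(component_count(gt), component_count(pred))   # 1 vs. 2 -> connectivity is lost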

  • Open Access

    ARTICLE

    ConvNeXt-UperNet-Based Deep Learning Model for Road Extraction from High-Resolution Remote Sensing Images

    Jing Wang1,2,*, Chen Zhang1, Tianwen Lin1

    CMC-Computers, Materials & Continua, Vol.80, No.2, pp. 1907-1925, 2024, DOI:10.32604/cmc.2024.052597 - 15 August 2024

    Abstract When existing deep learning models are used for road extraction tasks from high-resolution images, they are easily affected by noise factors such as tree and building occlusion and complex backgrounds, resulting in incomplete road extraction and low accuracy. We propose the introduction of spatial and channel attention modules to the convolutional neural network ConvNeXt. Then, ConvNeXt is used as the backbone network, which cooperates with the perceptual analysis network UPerNet, retains the detection head of the semantic segmentation, and builds a new model ConvNeXt-UPerNet to suppress noise interference. Training on the open-source DeepGlobe and CHN6-CUG…
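
    A minimal channel-plus-spatial attention block of the kind this abstract adds to the ConvNeXt backbone (CBAM-style); the exact module design and where it sits in ConvNeXt-UPerNet may differ from this sketch.

    import torch
    import torch.nn as nn

    class ChannelSpatialAttention(nn.Module):
        def __init__(self, channels: int, reduction: int = 4):
            super().__init__()
            self.channel_mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())
            self.spatial_conv = nn.Sequential(
                nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Channel attention: weight feature channels by their global average response.
            w = self.channel_mlp(x.mean(dim=(2, 3)))[:, :, None, None]
            x = x * w
            # Spatial attention: weight positions using pooled channel statistics.
            pooled = torch.cat([x.mean(dim=1, keepdim=True),
                                x.max(dim=1, keepdim=True).values], dim=1)
            return x * self.spatial_conv(pooled)

    # Example: apply to a backbone feature map
    out = ChannelSpatialAttention(64)(torch.randn(2, 64, 32, 32))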

  • Open Access

    ARTICLE

    Semantic Segmentation and YOLO Detector over Aerial Vehicle Images

    Asifa Mehmood Qureshi1, Abdul Haleem Butt1, Abdulwahab Alazeb2, Naif Al Mudawi2, Mohammad Alonazi3, Nouf Abdullah Almujally4, Ahmad Jalal1, Hui Liu5,*

    CMC-Computers, Materials & Continua, Vol.80, No.2, pp. 3315-3332, 2024, DOI:10.32604/cmc.2024.052582 - 15 August 2024

    Abstract Intelligent vehicle tracking and detection are crucial tasks in the realm of highway management. However, vehicles come in a range of sizes, which makes them challenging to detect and affects the traffic monitoring system’s overall accuracy. Deep learning is considered to be an efficient method for object detection in vision-based systems. In this paper, we propose a vision-based vehicle detection and tracking system based on a You Only Look Once version 5 (YOLOv5) detector combined with a segmentation technique. The model consists of six steps. In the first step, all the extracted traffic sequence images are subjected…
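
    For reference, a detection step with the publicly available YOLOv5 hub model is sketched below; the paper's trained weights, preprocessing, and the segmentation stage are not reproduced, and the image path is a placeholder.

    import torch

    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
    results = model('aerial_frame.jpg')          # hypothetical aerial traffic image
    boxes = results.xyxy[0]                      # (x1, y1, x2, y2, confidence, class) per detection
    for *xyxy, conf, cls in boxes.tolist():
        print(model.names[int(cls)], conf, xyxy)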

  • Open Access

    ARTICLE

    ED-Ged: Nighttime Image Semantic Segmentation Based on Enhanced Detail and Bidirectional Guidance

    Xiaoli Yuan, Jianxun Zhang*, Xuejie Wang, Zhuhong Chu

    CMC-Computers, Materials & Continua, Vol.80, No.2, pp. 2443-2462, 2024, DOI:10.32604/cmc.2024.052285 - 15 August 2024

    Abstract Semantic segmentation of driving scene images is crucial for autonomous driving. While deep learning technology has significantly improved daytime image semantic segmentation, nighttime images pose challenges due to factors like poor lighting and overexposure, making it difficult to recognize small objects. To address this, we propose an Image Adaptive Enhancement (IAEN) module comprising a parameter predictor (Edip), multiple image processing filters (Mdif), and a Detail Processing Module (DPM). Edip combines image processing filters to predict parameters like exposure and hue, optimizing image quality. We adopt a novel image encoder to enhance parameter prediction accuracy by…
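
    A toy version of the parameter-predictor idea described above: a small network estimates an exposure gain and a gamma value from the input image, and differentiable filters apply them before segmentation. The module sizes, parameter ranges, and filter choices here are assumptions, not the paper's Edip/Mdif/DPM.

    import torch
    import torch.nn as nn

    class TinyParamPredictor(nn.Module):
        """Predicts an exposure gain and a gamma value from a downsampled image."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.AdaptiveAvgPool2d(16), nn.Flatten(),
                nn.Linear(3 * 16 * 16, 32), nn.ReLU(),
                nn.Linear(32, 2))

        def forward(self, img):
            p = self.net(img)
            gain = 0.5 + torch.sigmoid(p[:, 0])      # exposure gain kept in (0.5, 1.5)
            gamma = 0.5 + torch.sigmoid(p[:, 1])     # gamma kept in (0.5, 1.5)
            return gain, gamma

    def apply_filters(img, gain, gamma):
        # Differentiable exposure and gamma filters applied per image.
        img = img * gain.view(-1, 1, 1, 1)
        return img.clamp(1e-6, 1.0) ** gamma.view(-1, 1, 1, 1)

    night = torch.rand(2, 3, 64, 64) * 0.2           # synthetic under-exposed batch
    gain, gamma = TinyParamPredictor()(night)
    enhanced = apply_filters(night, gain, gamma)     # would then feed the segmentation network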

  • Open Access

    ARTICLE

    An Improved UNet Lightweight Network for Semantic Segmentation of Weed Images in Corn Fields

    Yu Zuo1, Wenwen Li2,*

    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4413-4431, 2024, DOI:10.32604/cmc.2024.049805 - 20 June 2024

    Abstract In cornfields, factors such as the similarity between corn seedlings and weeds and the blurring of plant edge details pose challenges to corn and weed segmentation. In addition, remote areas such as farmland are usually constrained by limited computational resources and limited collected data. Therefore, it becomes necessary to lighten the model to better adapt to complex cornfield scenes, and make full use of the limited data information. In this paper, we propose an improved image segmentation algorithm based on UNet. Firstly, the inverted residual structure is introduced into the contraction path to reduce the…
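
    For reference, a MobileNetV2-style inverted residual block of the kind the abstract introduces into the UNet contraction path; this is a generic sketch, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class InvertedResidual(nn.Module):
        def __init__(self, channels: int, expansion: int = 4):
            super().__init__()
            hidden = channels * expansion
            self.block = nn.Sequential(
                nn.Conv2d(channels, hidden, 1, bias=False),                 # 1x1 expand
                nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden,      # 3x3 depthwise
                          bias=False),
                nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, channels, 1, bias=False),                 # 1x1 project
                nn.BatchNorm2d(channels))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x + self.block(x)    # residual connection (stride 1, equal channels)

    out = InvertedResidual(32)(torch.randn(1, 32, 128, 128))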

  • Open Access

    ARTICLE

    SGT-Net: A Transformer-Based Stratified Graph Convolutional Network for 3D Point Cloud Semantic Segmentation

    Suyi Liu1,*, Jianning Chi1, Chengdong Wu1, Fang Xu2,3,4, Xiaosheng Yu1

    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4471-4489, 2024, DOI:10.32604/cmc.2024.049450 - 20 June 2024

    Abstract In recent years, semantic segmentation on 3D point cloud data has attracted much attention. Unlike 2D images where pixels distribute regularly in the image domain, 3D point clouds in non-Euclidean space are irregular and inherently sparse. Therefore, it is very difficult to extract long-range contexts and effectively aggregate local features for semantic segmentation in 3D point cloud space. Most current methods either focus on local feature aggregation or long-range context dependency, but fail to directly establish a global-local feature extractor to complete the point cloud semantic segmentation tasks. In this paper, we propose a Transformer-based…
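
    Because point clouds are irregular, local neighbourhoods must be built explicitly before features can be aggregated. The k-nearest-neighbour grouping below is a generic starting point for such models, not SGT-Net's stratified graph or Transformer layers.

    import torch

    def knn_group(points: torch.Tensor, k: int = 16) -> torch.Tensor:
        """points: (N, 3) -> (N, k, 3) coordinates of each point's k nearest neighbours."""
        dists = torch.cdist(points, points)                      # (N, N) pairwise Euclidean distances
        idx = dists.topk(k + 1, largest=False).indices[:, 1:]    # drop the point itself
        return points[idx]                                       # gather neighbour coordinates

    cloud = torch.rand(1024, 3)
    neighbours = knn_group(cloud)                                # local groups for feature aggregation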

  • Open Access

    ARTICLE

    CrossFormer Embedding DeepLabv3+ for Remote Sensing Images Semantic Segmentation

    Qixiang Tong, Zhipeng Zhu, Min Zhang, Kerui Cao, Haihua Xing*

    CMC-Computers, Materials & Continua, Vol.79, No.1, pp. 1353-1375, 2024, DOI:10.32604/cmc.2024.049187 - 25 April 2024

    Abstract High-resolution remote sensing image segmentation is a challenging task. In urban remote sensing, the presence of occlusions and shadows often results in blurred or invisible object boundaries, thereby increasing the difficulty of segmentation. In this paper, an improved network with a cross-region self-attention mechanism for multi-scale features based on DeepLabv3+ is designed to address the difficulties of small object segmentation and blurred target edge segmentation. First, we use CrossFormer as the backbone feature extraction network to achieve the interaction between large- and small-scale features, and establish self-attention associations between features at both large and small…

  • Open Access

    ARTICLE

    Automatic Road Tunnel Crack Inspection Based on Crack Area Sensing and Multiscale Semantic Segmentation

    Dingping Chen1, Zhiheng Zhu2, Jinyang Fu1,3, Jilin He1,*

    CMC-Computers, Materials & Continua, Vol.79, No.1, pp. 1679-1703, 2024, DOI:10.32604/cmc.2024.049048 - 25 April 2024

    Abstract The detection of crack defects on the walls of road tunnels is a crucial step in the process of ensuring travel safety and performing routine tunnel maintenance. The automatic and accurate detection of cracks on the surface of road tunnels is the key to improving the maintenance efficiency of road tunnels. Machine vision technology combined with a deep neural network model is an effective means to realize the localization and identification of crack defects on the surface of road tunnels. We propose a complete set of automatic inspection methods for identifying cracks on the walls…

Displaying results 1-10 of 51 (page 1 of 6).