Open Access
REVIEW
X-Ray Techniques for Defect Detection in Industrial Components and Materials: A Review
1 School of Software Engineering, Shenyang University of Technology, Shenyang, 110870, China
2 National Key Laboratory of Advanced Casting Technologies, Shenyang, 110022, China
3 China Academy of Machine Shenyang Research Institute of Foundry Company Ltd., Shenyang, 110022, China
4 School of Mechanical Engineering & Automation, Northeastern University, Shenyang, 110819, China
* Corresponding Authors: Kechen Song. Email: ; Han Yu. Email:
Computers, Materials & Continua 2025, 85(3), 4173-4201. https://doi.org/10.32604/cmc.2025.070906
Received 27 July 2025; Accepted 09 September 2025; Issue published 23 October 2025
Abstract
With the growing demand for higher product quality in manufacturing, X-ray non-destructive testing has found widespread application in industrial quality control and across a broad range of other industrial scenarios, owing to its unique capability to penetrate materials and reveal both internal and surface defects. This paper presents a systematic review of recent advances and current applications of X-ray-based defect detection in industrial components. It begins with an overview of the fundamental principles of X-ray imaging and typical inspection workflows, followed by a review of classical image processing methods for defect detection, segmentation, and classification, with particular emphasis on their limitations in feature extraction and robustness. The focus then shifts to recent developments in deep learning techniques—particularly convolutional neural networks, object detection, and segmentation algorithms—and their innovative applications in X-ray defect analysis, which demonstrate substantial advantages in terms of automation and accuracy. In addition, the paper summarizes newly released public datasets and performance evaluation metrics reported in recent years. Finally, it discusses the current challenges and potential solutions in X-ray-based defect detection for industrial components, outlines key directions for future research, and highlights the practical relevance of these advances to real-world industrial applications.
1 Introduction
Nondestructive Testing (NDT) technologies play an irreplaceable role in industrial quality control due to their ability to assess materials without causing damage. Common NDT methods include radiographic testing, ultrasonic testing, magnetic particle testing, eddy current testing, liquid penetrant testing, leak testing, and visual inspection [1]. Each of these methods has its unique applications and advantages. Among these, ultrasonic and magnetic particle testing are commonly used to assess the size and location of defects in castings [2]. Recent studies have attempted to implement the EfficientDet architecture for automatic defect detection in ultrasonic images of stainless steel. This method effectively mitigates the issues of low efficiency and human error in manual analysis [3]. However, challenges remain, such as the lack of consistency evaluation with visual inspections, the absence of POD (Probability of Detection) analysis, and inadequate model deployment adaptability, which indicate that ultrasonic testing still faces obstacles to wide adoption in complex industrial scenarios. Infrared thermography and ultrasonic microscopy are suitable for examining localized microstructures within materials [4]. Optical microscopy (OM), as the core tool for surface defect analysis, is primarily used for high-precision morphological characterization and rapid screening [5]. Optical microscopes offer high-resolution imaging of surface defects and are particularly effective for analyzing micron-level defects such as scratches, porosity, and inclusions. They are widely used for surface inspection of castings, metals, plastics, and other materials. Electron microscopy (EM) provides even higher resolution and is suitable for analyzing the microstructure of samples, revealing the details of metals, ceramics, and other materials. It is widely applied to analyze fine particles, cracks, and other minute defects [6]. For internal defect detection in castings, X-ray microscopy can be used to visualize the internal structure of materials and provide three-dimensional structural information, especially for complex structures [7]. Radiographic NDT, based on X-ray imaging [8], offers advantages such as strong penetration, fast imaging speed, and the ability to detect internal defects without damaging the test object. This method has become a crucial tool for detecting internal defects in castings, welds, and other complex structures.
X-ray inspection has evolved through several key stages, including manual interpretation, digital radiography, image processing, machine learning, and, more recently, deep learning. The major milestones in the development of X-ray-based defect detection technologies are illustrated in Fig. 1.

Figure 1: Key milestones in the evolution of industrial X-ray defect detection techniques. U-Net [1], SSD [2], EfficientDet [3], Chan–Vese [8], MFEEC-BH, SVM, SqueezeNet, Fast R-CNN, Faster R-CNN [9–13], Watershed, Random Forest, YOLO, RetinaNet, R-CNN [14–18], STMA-Net, VGG, AlexNet, GAN, ResNet [19–23], Cascade Mask R-CNN [24], CycleGAN [25], SegNet [26]
Visual inspection methods mainly rely on human experience and subjective judgment, which suffer from low efficiency, high labor intensity, limited accuracy, and poor real-time performance [27]. In traditional radiographic testing, data are primarily stored on film, which must be manually scanned and annotated before being converted into digital images. This process is not only inefficient but also prone to errors, severely hindering the training of intelligent algorithms [13]. In contrast, digital radiography converts X-ray energy into digital signals through imaging panels, thereby improving information acquisition efficiency and facilitating the widespread application of image processing algorithms, although its spatial resolution still struggles to surpass that of traditional film [28].
Currently, X-ray–based weld defect detection methods are mainly divided into traditional approaches and deep learning approaches [29]. Traditional methods can be further categorized into model-driven, spectral, and statistical techniques. Model-driven approaches (e.g., edge detection [30] and shape-matching algorithms) achieve high accuracy when dealing with defects of well-defined geometric structures but are limited when addressing complex textures or irregular shapes. Spectral approaches (e.g., infrared imaging and ultrasonic imaging [4]) perform well for certain materials but are constrained by imaging principles and equipment limitations. Statistical approaches, such as texture analysis [31] and histogram equalization [9], can effectively model image features in homogeneous materials but often fail to remain robust under complex backgrounds or high noise conditions. Traditional machine learning-based methods, such as SVM [10], typically combine handcrafted features with classifiers to identify defects. However, these methods are highly sensitive to defect size and noise, are not end-to-end models, and rely heavily on visual feature extraction, which results in poor generalization [32]. In recent years, convolutional neural networks (CNNs) have emerged as the mainstream technology for X-ray image classification, detection, and segmentation. By autonomously extracting image features from datasets, CNNs possess stronger representational capability, eliminate the need for visual feature engineering, and effectively overcome the limitations of traditional methods.
Building on the above background, this review summarizes recent progress in X-ray-based defect detection for industrial components, with a particular focus on three key tasks: defect detection, classification, and segmentation. The structure of the paper is as follows: Section 2 introduces the fundamental principles of X-ray imaging and system architecture, and provides an overview of publicly available datasets and defect types in recent industrial X-ray imaging studies. Section 3 reviews traditional image processing and machine learning approaches for defect detection in X-ray images. Section 4 focuses on recent developments in deep learning-based methods. Section 5 presents commonly used evaluation metrics. Finally, Section 6 concludes the paper and outlines directions for our future work.
2 X-Ray Imaging and Defect Datasets for Industrial Applications
2.1 X-Ray Imaging System Overview
Depending on the imaging modality, X-ray systems are generally categorized into three types: computed tomography (CT), digital radiography (DR), and computed radiography (CR) [33]. CT systems reconstruct three-dimensional images of the inspected object through multi-angle scanning, providing richer depth information and higher accuracy for detailed analysis of complex defects. DR systems employ digital flat-panel detectors to capture transmitted X-rays and generate two-dimensional grayscale images, making them ideal for real-time online inspection. CR technology is similar in principle to traditional X-ray imaging, but it replaces film with an imaging plate (IP) coated with phosphor [34]. Compared with conventional film, CR offers significant advantages in image quality, processing speed, and digital storage; however, it is somewhat inferior to DR systems in terms of real-time performance and processing efficiency.
According to ISO 17636, a typical industrial X-ray inspection system consists of four core components: an X-ray source, a digital detector array (DDA), a mechanical transmission system, and a computer-based image processing unit [35]. The principle of real-time imaging is as follows: during inspection, the X-ray tube emits a collimated ionizing beam that passes through the specimen (e.g., a casting). The beam intensity is attenuated depending on the material thickness and the presence of internal pores, inclusions, or discontinuities. After transmission, the attenuated beam is captured by the imaging device—typically a DDA—and subsequently processed by the image processing unit to generate digital images. These images provide essential information about the internal structure of the specimen, where different grayscale values correspond to variations in local density and thickness.
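The attenuation described above is commonly modeled, for a monochromatic narrow beam, by the Beer–Lambert law:

$$I = I_0 \, e^{-\mu t}$$

where $I_0$ is the incident intensity, $\mu$ the linear attenuation coefficient of the material, and $t$ the penetrated thickness. An internal pore or inclusion locally changes the effective $\mu t$ along the beam path, which is why such defects appear as regions of altered grayscale in the transmitted image.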
Beyond DDAs, modern industrial X-ray imaging also employs alternative detection techniques. For instance, CMOS line-scan sensors [36] convert X-ray signals into electrical signals, enabling high-resolution imaging with advantages such as fast acquisition speed and low power consumption. Lens-coupled CMOS detectors [37] utilize a scintillator to convert X-rays into visible light, which is then focused onto a CMOS camera via a lens system. This configuration produces high-quality, low-noise images, making it particularly suitable for precision inspection. In addition, CCD cameras coupled with image intensifiers are also applied in certain systems. The digital images obtained through these various techniques allow for classification of castings into defect-free and defective samples, and further enable detailed analysis of defect location, morphology, and size.
High-quality datasets are a critical foundation driving the continuous development of defect detection algorithms for X-ray images, especially indispensable in deep learning–based automated inspection systems. The GDX-ray dataset [38] is currently one of the most widely used public X-ray image datasets in the industrial defect detection field, covering various industrial objects such as castings, welds, and printed circuit boards. To facilitate research and applications in this area, we systematically review and summarize publicly available X-ray image datasets for defect detection of industrial components. Some of the datasets are shown in Fig. 2. Some of these datasets were released recently and hold significant research value and practical relevance. Each dataset entry includes its name, access link, and corresponding references to aid researchers in further exploration and utilization, thereby providing a solid foundation for subsequent model training, evaluation, and method comparison. A summary of the datasets in the field of X-ray industrial product defect detection is shown in Table 1.

Figure 2: X-ray image datasets for surface defect detection of industrial components
In industrial production, X-ray nondestructive testing is widely used for quality monitoring of various materials and products, and it is particularly effective at revealing defects hidden inside materials. Defect types vary across industrial scenarios and mainly include weld defects, casting defects, rail defects, composite material defects, and anomalies inside structures such as cables and automotive parts. The following describes these typical defects with reference to the literature and practical applications.
The quality of a weld seam directly affects the overall structural safety. Common defects in X-ray images include pores, lack of penetration (LOP), lack of fusion (LOF), slag inclusion, cracks, and tungsten inclusions [47]. Cracks generally exhibit low contrast and irregular shapes, often appearing as thin bright lines or irregular bright bands, with fine branches at their ends. LOP is primarily found in the center of the weld, whereas LOF usually occurs at the weld edges. Both types are linearly distributed and exhibit subtle intensity variations, making them difficult to distinguish visually. Strip-like defects typically have an aspect ratio greater than 3 and relatively uniform intensity, so LOP and LOF share highly similar imaging characteristics in X-ray images. In contrast, round defects such as pores and slag inclusions are easier to identify due to their regular shape and higher image contrast [48]. In pipeline welding applications, defects such as slag inclusions and incomplete fusion (ICF) are also common [12]. Slag inclusions are irregularly shaped, with relatively high grayscale values and blurred edges, and are often located in the weld center or at the interface with the base material. ICF refers to areas of poor fusion with a lower density than the surrounding metal, allowing X-rays to pass through more easily; thus, these regions appear as brighter linear areas in X-ray images.
During the solidification process, castings frequently develop various internal defects due to gas entrapment, metal shrinkage, or foreign material inclusion [49]. Gas porosity (GP) forms from trapped gases, typically appearing as circular or irregular black spots. Shrinkage cavities (SC) result from cooling contraction, creating irregular voids predominantly found in thicker sections [50]. At the microscopic scale, shrinkage can form spongy shrinkage (SS), characterized by dense small pores resembling sponge-like structures. Filamentary shrinkage manifests as elongated fissures with characteristic network-like distribution patterns. When shrinkage develops along grain growth directions, it forms dendritic shrinkage, exhibiting finer and more dispersed linear structures.
Due to their multi-layered structure [51], strong anisotropic properties, and sensitive interlayer bonding, composite materials exhibit more complex and diverse defect types. Common defects include interlayer delamination, voids, inclusions, and cracks. Among these, interlayer delamination typically manifests as localized low-intensity bands distributed along the material’s structural direction in X-ray images. Voids and inclusions predominantly appear as regular or irregular regions of abnormal grayscale, visible as regular circular spots or scattered particles. Cracks generally extend along fiber directions, presenting as fine linear black streaks that often intermingle with internal structural textures, making identification challenging. In precision industrial applications such as automotive manufacturing [52], critical components like engine cylinder blocks and braking systems are susceptible to various internal defects during production, primarily including bubbles and cracks. Bubble defects commonly originate from unvented gases, appearing as regular black spots in images. Cracks predominantly occur in stress concentration areas, exhibiting elongated morphology with varying orientations.
3 Traditional Machine Learning
Traditional image processing methods have played a central role in the evolution of X-ray industrial image defect detection. Although deep learning has gained significant momentum in recent years, traditional approaches remain widely adopted in engineering practice due to their low computational cost, ease of implementation, and strong interpretability. They are particularly suitable for scenarios with limited data, real-time processing requirements, and hardware constraints. These methods continue to demonstrate strong vitality in tasks such as image enhancement, edge detection, segmentation, and feature extraction. To systematically analyze the role of traditional techniques in defect detection, this section will examine three key aspects: image preprocessing, defect detection and extraction, and defect characterization and classification. Fig. 3 illustrates the functional classification of X-ray industrial defect detection within traditional machine vision.

Figure 3: Classification of traditional machine learning methods
3.1 Image Preprocessing
Image preprocessing is a critical step in the defect detection pipeline, aimed at enhancing the visibility of defects in X-ray images and providing a stable foundation for subsequent segmentation and detection. Depending on the specific processing objectives, traditional preprocessing techniques can be broadly categorized into two groups: image enhancement and image denoising.
In terms of image enhancement, commonly used techniques include histogram equalization, Retinex enhancement, and gray-level stretching. Mahmoudi et al. [53] introduced a homomorphic filter that effectively improves image brightness and contrast while avoiding noise amplification. Mohamed et al. [54] applied gray-level stretching to enhance underexposed radiographic weld images, thereby improving image contrast. Movafeghi et al. [55] proposed three image enhancement methods based on non-local regularization, which significantly improved the detectability of weld defects. In practical applications, however, these enhancement methods are still constrained by uncertain noise levels and the difficulty of parameter tuning. In addition, high-pass filtering is often employed to enhance high-frequency components such as edges and fine structures. Typically implemented via frequency-domain subtraction, high-pass filtering removes low-frequency components obtained from low-pass filtering while preserving or emphasizing high-frequency details. Based on this principle, a frequency-domain enhancement method for X-ray weld defect images was proposed in [56]. By comparing the performance of three high-pass filters, it was found that the ideal high-pass filter introduces noticeable sharp discontinuities in the image, whereas the Gaussian high-pass filter achieves a smoother intensity transition. This analysis also explains why certain automated defect detection methods perform poorly on low-quality images.
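As a concrete illustration of this frequency-domain idea (not code from [56]), the sketch below applies a Gaussian high-pass filter with high-frequency emphasis to a grayscale radiograph; the file name, cut-off radius, and emphasis weights are assumed values for demonstration.

```python
# Illustrative frequency-domain Gaussian high-pass enhancement of a grayscale
# radiograph; file name, cut-off radius D0, and emphasis weights are assumptions.
import cv2
import numpy as np

img = cv2.imread("weld.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Forward FFT with the zero-frequency component shifted to the center
F = np.fft.fftshift(np.fft.fft2(img))

# Gaussian high-pass transfer function H = 1 - exp(-D^2 / (2 * D0^2))
rows, cols = img.shape
u = np.arange(rows) - rows / 2
v = np.arange(cols) - cols / 2
D2 = u[:, None] ** 2 + v[None, :] ** 2
D0 = 30.0                                   # cut-off radius (assumed)
H = 1.0 - np.exp(-D2 / (2 * D0 ** 2))

# High-frequency emphasis: retain part of the original spectrum and boost the
# high-pass response to sharpen edges and fine defect structures
k1, k2 = 0.5, 1.5                           # emphasis weights (assumed)
enhanced = np.fft.ifft2(np.fft.ifftshift((k1 + k2 * H) * F)).real
enhanced = cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("weld_enhanced.png", enhanced)
```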
Noise in X-ray images mainly arises during the inspection process and signal transmission. The primary sources include quantum noise, which originates from the statistical fluctuation of X-ray photons; granular noise caused by electronic emissions; and Gaussian noise introduced during acquisition and transmission. The presence of noise can severely interfere with the defect detection performance of network models [57]. Image filtering aims to eliminate noise as much as possible while preserving edge and detail information. Ajmi et al. [58] compared Gaussian and median filtering and found Gaussian filtering to be more suitable for radiographic weld images; their approach also achieved a higher Peak Signal-to-Noise Ratio (PSNR) than the Otsu-based approach. Shao et al. [59] employed a combined median and mean filtering approach to reduce Gaussian noise in X-ray images, addressing the issue that traditional single-image detection methods often struggle to distinguish low-contrast defects from noise-induced false positives. Malarvel et al. [60] proposed an improved anisotropic diffusion model for denoising and defect detection, which is particularly effective for low-contrast images. Yahaghi et al. [61] applied interlaced multistage bilateral filtering and wavelet thresholding to X-ray images, thereby improving the signal-to-noise ratio in weld regions. It is important to note that these techniques often rely on the continuity of pixel value distributions. When the contrast between defects and the background is low, or when the image contains complex textures, they may lead to the loss of critical details or unintended enhancement. Although existing studies have demonstrated the successful application of automated detection methods in industrial weld inspection, their performance remains influenced by imaging conditions, parameter settings, and noise levels, and may degrade in complex scenarios. The advantages and disadvantages of each image filtering algorithm are summarized in Table 2.
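The filter families compared in these studies map onto standard OpenCV calls; the following sketch is purely illustrative, with assumed kernel sizes, sigma values, and file names rather than the exact settings used in the cited works.

```python
# Illustrative OpenCV counterparts of the filters discussed above; kernel sizes,
# sigma values, and file names are assumptions rather than the cited settings.
import cv2

noisy = cv2.imread("xray_noisy.png", cv2.IMREAD_GRAYSCALE)

gaussian  = cv2.GaussianBlur(noisy, (5, 5), 1.0)        # attenuates Gaussian noise
median    = cv2.medianBlur(noisy, 5)                    # robust to impulse/granular noise
med_mean  = cv2.blur(cv2.medianBlur(noisy, 5), (3, 3))  # one way to cascade median and mean filtering
bilateral = cv2.bilateralFilter(noisy, 9, 50, 50)       # smooths while preserving edges
```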

3.2 Defect Detection and Segmentation
The key task of defect detection and segmentation is to separate potential defect regions from complex backgrounds. Initial extraction of suspected defect areas is typically performed through edge detection or threshold segmentation, followed by refined segmentation using connected component analysis and morphological operations to obtain well-defined defect contours.
Ramírez et al. [30] proposed a method combining adaptive thresholding with Canny edge detection, which enabled highly accurate measurement of pore areas. The Watershed segmentation method, a classical approach grounded in morphology and topology, conceptualizes a grayscale image as a topographic surface, where region boundaries are determined through a simulated “flooding” process. In industrial X-ray imaging, Watershed segmentation is particularly valuable for separating adjacent or boundary-blurred defect regions. Alaknanda et al. [14] developed a two-stage watershed segmentation algorithm that not only alleviated the issue of over-segmentation but also produced boundaries with minimal deviation from their true locations. This approach successfully detected defects such as slag inclusions and wormhole-type weld flaws, providing reasonably accurate defect contours. Furthermore, Wang et al. [62] introduced an improved watershed segmentation method that incorporates dynamic combination rules, effectively suppressing over-segmentation while demonstrating enhanced robustness against noise.
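A minimal marker-based watershed sketch in OpenCV, of the kind underlying the methods above, is given below; the input file, the polarity assumption (defects darker than background), and the morphology parameters are illustrative assumptions.

```python
# Minimal marker-based watershed sketch (OpenCV) for separating touching defect
# regions; the input file, polarity assumption, and parameters are illustrative.
import cv2
import numpy as np

gray = cv2.imread("casting.png", cv2.IMREAD_GRAYSCALE)

# Binarize with Otsu; THRESH_BINARY_INV assumes defects appear darker than the
# surrounding material (use THRESH_BINARY if the polarity is reversed)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Sure background by dilation; sure foreground from the distance transform
kernel = np.ones((3, 3), np.uint8)
sure_bg = cv2.dilate(binary, kernel, iterations=3)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)

# Label the sure-foreground blobs as markers and let the "flooding" resolve the
# unknown band between touching regions
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1            # reserve 0 for the unknown region
markers[unknown == 255] = 0
markers = cv2.watershed(cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR), markers)
# Pixels labeled -1 are the watershed boundaries between adjacent defect regions
```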
The Otsu thresholding method is a classical global thresholding algorithm, which determines the optimal segmentation threshold by maximizing inter-class variance, thereby separating the target region from the background without manual intervention. However, its performance may degrade when the gray-level histogram is unimodal or the contrast between the target and background is extremely low. To overcome these limitations, Otsu is often combined with filtering, histogram equalization, or region-growing techniques to enhance segmentation accuracy. Tian et al. [63] proposed an Otsu-based dynamic thresholding approach for weld extraction, which integrates waveform contour analysis and large-scale smoothing lag techniques, and further applies the Sobel operator for edge detection, leading to significant performance improvements. Shen et al. [64] developed a defect detection framework for Through-Silicon via (TSV) using a self-organizing map (SOM) neural network combined with Otsu thresholding, where Otsu was employed for the qualitative localization of voids inside TSVs. Tang et al. [9] introduced a boundary histogram-based maximum fuzzy entropy criterion segmentation method (MFEEC-BH), which successfully extracted internal defects in castings. Moreover, reference [65] systematically compared several segmentation methods, including Otsu, adaptive thresholding, median filtering, and spatial smoothing-based segmentation, to address the challenges of low contrast and complex gray-level variations in aluminum casting X-ray images. Experimental results demonstrated that spatial smoothing-based segmentation achieved the best performance in detecting defects of various types and sizes. The MFEEC-BH method is sensitive to small defects but computationally intensive, whereas the spatial smoothing approach can handle large defects but lacks precise edge localization.
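The sketch below contrasts global Otsu thresholding with local adaptive thresholding, two of the baseline strategies compared above; the input image, block size, and offset are illustrative assumptions.

```python
# Minimal sketch contrasting global Otsu with local adaptive thresholding on a
# low-contrast casting radiograph (file name, block size, and offset assumed).
import cv2

gray = cv2.imread("aluminum_casting.png", cv2.IMREAD_GRAYSCALE)
gray = cv2.medianBlur(gray, 5)   # light denoising stabilizes both thresholds

# Global threshold chosen by maximizing inter-class variance
t_otsu, otsu_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Local threshold from a Gaussian-weighted neighborhood; more robust when
# illumination or material thickness varies across the image
adaptive_mask = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 51, 5
)
print("Otsu threshold:", t_otsu)
```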
The Chan–Vese model is a classical variational image segmentation method based on energy functional minimization, particularly suitable for handling structures with blurred boundaries and low contrast, such as voids and pores. Suhaila et al. [8] applied this model to X-ray weld defect detection and achieved effective boundary segmentation of defects. Ramou [31] further improved the Chan–Vese model by employing a multiphase extension of the Mumford–Shah functional, enabling the segmentation results to simultaneously capture defect boundaries, size, and texture information. The Mumford–Shah model decomposes an image into several regions and approximates each region with a smooth function, thereby achieving optimal segmentation [66]. Building on traditional optimization, researchers have explored the integration of the Chan–Vese model with data-driven approaches. Abdelkader et al. [67] combined fuzzy C-means (FCM) clustering with the Chan–Vese model, significantly improving the accuracy of weld defect segmentation. In a subsequent study [68], the integration of region-of-interest (ROI) extraction and wavelet-based denoising further enhanced the model’s robustness when applied to low-quality X-ray images. In addition, Radi et al. [69] proposed a filter-based fast Chan–Vese segmentation approach, which can accurately segment defects of various shapes without the need for complex feature extraction or model training.
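For reference, the Chan–Vese model is available off the shelf in scikit-image; the sketch below shows a minimal call, with parameter values chosen for illustration rather than taken from the cited studies.

```python
# Minimal Chan–Vese call via scikit-image for a low-contrast defect ROI;
# parameter values are illustrative, not taken from the cited studies.
from skimage import io, img_as_float
from skimage.segmentation import chan_vese

image = img_as_float(io.imread("weld_roi.png", as_gray=True))

# mu penalizes contour length; lambda1/lambda2 weight the inside/outside
# intensity-fitting terms of the underlying energy functional
mask = chan_vese(image, mu=0.1, lambda1=1.0, lambda2=1.0,
                 tol=1e-3, max_num_iter=200, init_level_set="checkerboard")
# `mask` is a boolean array marking the segmented defect region
```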
3.3 Feature Extraction and Machine Learning
In traditional image processing frameworks, automatic defect detection often relies on a combination of explicit feature extraction and machine learning classifiers. Researchers have constructed multidimensional image features—such as grayscale, texture, and frequency-domain descriptors—and combined them with conventional classification algorithms like SVM and random forests (RF) to achieve defect detection in X-ray images of various industrial components. For example, Shao et al. [70] proposed a real-time X-ray weld defect detection method based on SVM, using median-filtered background subtraction and adaptive thresholding for defect segmentation, effectively reducing both miss rates and false positives. Patil et al. [71] introduced a hybrid detection framework that integrates SVM and artificial neural networks (ANN) to detect fine defects; however, misclassification remains an issue for easily confused types such as porosity and lack of fusion. Wu et al. [72] conducted a comparative evaluation of Gabor, histogram of oriented gradients (HOG), and local binary pattern (LBP) feature extraction methods combined with eight machine learning models. Their results showed that LBP features combined with a gradient boosting classifier achieved the best performance.
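A minimal sketch of this handcrafted-feature pipeline (LBP histograms fed to an SVM) is shown below; the patches and labels are random placeholders standing in for annotated ROIs, so the snippet only illustrates the workflow, not the reported results.

```python
# Minimal sketch of the handcrafted-feature pipeline: uniform-LBP histograms as
# texture descriptors fed to an SVM. The patches and labels below are random
# placeholders standing in for annotated defect/defect-free ROIs.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def lbp_histogram(gray, P=8, R=1.0):
    """Fixed-length uniform-LBP histogram of a grayscale patch."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
patches = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(40)]
labels = rng.integers(0, 2, size=40)          # 1 = defective, 0 = defect-free

features = np.array([lbp_histogram(p) for p in patches])
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.25)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```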
Additionally, researchers have explored other combinations of texture statistical features and classifiers. Dong et al. [15] developed a defect detection system based on RF, which first locates the weld centerline via regression and then applies a classifier to identify defects, achieving a detection rate of 80% on X-ray images. Malarvel et al. [47] proposed an X-ray weld defect detection and classification method based on multi-class SVM (MSVM), which demonstrated high accuracy, especially in identifying small defects. Cozma et al. [16] evaluated defect detection in tire X-ray images and found that, despite the advances of deep learning models such as YOLOv8, traditional feature engineering methods (e.g., LBP and the gray-level co-occurrence matrix, GLCM) remain competitive in certain scenarios. In particular, random forest combined with optimized GLCM features achieved an accuracy of 84%, though anisotropic textures continue to pose challenges for CNN-based approaches. Ramana et al. [73] used GLCM-based texture features combined with classifiers such as K-Nearest Neighbors (KNN), SVM, and RF to detect subsurface defects. Despite limitations in accuracy and robustness, traditional feature extraction combined with machine learning remains valuable, particularly in data-scarce or high-interpretability industrial scenarios. The literature based on traditional machine learning methods is summarized in Table 3.

4 Defect Detection Algorithm Based on Deep Learning
With the increasing complexity and diversity of X-ray defect images, handcrafted features have become inadequate for accurate analysis. Deep learning models, with their powerful feature learning capabilities, have gradually emerged as the mainstream approach. By constructing multi-layer neural network architectures, these models can automatically learn multi-level features—such as texture, shape, and semantic information—from large volumes of images, significantly improving detection accuracy, generalization, and robustness. Based on functional categorization, defect detection tasks can be divided into three main types: defect detection, defect classification, and defect segmentation. This section provides a focused review of recent developments and representative achievements in deep learning approaches across these three areas. A detailed classification based on deep learning methods is shown in Fig. 4.

Figure 4: Defect detection algorithms based on deep learning
4.1 Defect Detection
Defect detection refers to determining whether defects are present in an X-ray image and locating their positions. Models designed for defect detection can generally be categorized into single-stage and two-stage architectures. Single-stage architectures include YOLO, Single Shot MultiBox Detector (SSD), RetinaNet, and EfficientDet. The YOLO series treats the detection task as an end-to-end regression problem, enabling real-time detection of both the locations and categories of defects. SSD identifies objects of varying sizes by detecting features at multiple scales. RetinaNet introduces a Focal Loss function to address the problem of class imbalance in the training data, while EfficientDet utilizes a Bidirectional Feature Pyramid Network (BiFPN) structure to enhance feature fusion efficiency. Two-stage models include R-CNN and its variants, such as Faster R-CNN and Cascade R-CNN. Although these models tend to be slower, they offer higher localization accuracy and better performance in detecting small defects.
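As an illustration of the two-stage family, the sketch below adapts torchvision's Faster R-CNN (ResNet-50 FPN backbone) to a hypothetical set of defect categories; the class list and dummy input are assumptions, and real use would require fine-tuning on annotated radiographs.

```python
# Minimal sketch of adapting a two-stage detector (torchvision Faster R-CNN with
# a ResNet-50 FPN backbone) to hypothetical defect categories.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 1 + 4   # background + e.g. pore, crack, slag inclusion, lack of fusion
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")   # COCO-pretrained weights

# Swap the box-classification head so it predicts the defect categories
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
with torch.no_grad():
    # Single-channel radiograph replicated to 3 channels and scaled to [0, 1]
    dummy = torch.rand(3, 512, 512)
    outputs = model([dummy])      # list of dicts with 'boxes', 'labels', 'scores'
print(outputs[0]["boxes"].shape)
```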
Yang et al. [41] combined YOLOv5 with Mosaic data augmentation to achieve real-time detection of spiral pipeline weld seams, reaching an accuracy of 97.8%. Liu et al. [74] designed an Efficient Feature Extraction (EFE) module and a Reinforced Multi-scale Feature (RMF) module to construct a lightweight YOLO network, enhancing both detection speed and accuracy. Zhang et al. [75] used YOLOv5-s as the baseline and integrated specialized modules, including the deep feature extraction module (DFEM) and the convolutional spatial pyramid pooling module (GCSPPF). Furthermore, they proposed a self-optimization strategy termed the convolutional optimization algorithm (COA), which demonstrated that their LF-YOLO model was able to substantially reduce parameter size and computational cost while maintaining high detection performance. Su et al. [76] proposed an improved YOLOv5-based model to handle pipeline weld images with low contrast and large scale variation. The model integrates dual attention mechanisms, including Efficient Multi-Scale Attention (EMA) and Efficient Channel Attention (ECA), to enhance the detection of small defects. Zuo et al. [77] introduced an active learning mechanism to iteratively optimize the YOLO model, effectively improving its adaptability to complex defect types.
Research on Faster R-CNN has primarily focused on structural enhancements and feature augmentation to improve detection accuracy and sensitivity to small defects. Ferguson et al. [13] enhanced the Faster R-CNN architecture by integrating VGG, ResNet, and feature cascade strategies, achieving better detection performance on metal casting defects. Du et al. [35] adopted a Faster R-CNN framework with a Feature Pyramid Network (FPN) to detect defects in X-ray images of castings, which improved the ability to detect small defects but lacked capability for defect-type classification. Wang et al. [17] enhanced the RetinaNet framework by incorporating the Focal Loss function to address the class imbalance between foreground and background samples. Based on 6714 annotated images, experimental results demonstrated that the method achieved mAP values of 0.76, 0.79, and 0.92 for detecting pores, lack of fusion, and tungsten inclusions, respectively, significantly outperforming traditional approaches such as SVM that rely on handcrafted features. Ji et al. [12] further integrated the SPAM module into the traditional Faster R-CNN architecture, resulting in a 4.0% improvement in mAP. Liu et al. [18] proposed the AF-RCNN framework, incorporating the efficient convolutional attention module (ECAM) to enhance the model’s ability to learn from small and low-contrast defects, achieving a mAP of 85.4%. By comparing Faster R-CNN and SSD models, the study demonstrates that two-stage detectors perform better in small object detection. García Pérez et al. [78] developed an automatic defect detection system based on RetinaNet, which combined an FPN with image upsampling techniques. This method eliminated the need for complex segmentation and maintained a simpler model architecture for aluminum casting X-ray image analysis.
To address common challenges in industrial inspection such as class imbalance and poor image quality, several researchers have proposed improvements from the perspectives of feature learning and image enhancement, aiming to boost detection performance in complex scenarios. Liu et al. [29] tackled the problem of class imbalance in weld defect detection by proposing a Hybrid Feature Learning (HFL) approach that combines Base Class Learning (BCL) and Cross-Class Learning (CCL). This method demonstrated robust performance across three different datasets. Zuo et al. [79] introduced grayscale contrast enhancement and an adaptive FPN, which significantly improved the detection accuracy of complex weld seam images in X-ray imaging; however, the model still struggled to accurately estimate defect size. Cui et al. [42] proposed a new dataset for X-ray image inspection, named SSPWX-ray, along with a rapid unsupervised screening framework for weld defects (RSM). The framework incorporates a memory-aware transformer encoder and dual decoders to effectively suppress background noise, highlight small defect regions, and enhance detection capabilities in complex environments.
In addition, some methods integrate defect detection with image preprocessing or segmentation functions. For example, Fu et al. [80] proposed an improved Cascade Mask R-CNN framework for processing casting DR images. By incorporating preprocessing operations, their method significantly enhanced the detection performance of shrinkage porosity defects. Wang et al. [81] introduced the Zoom-In Object Targeting (ZIOT) model, which enhances the size-awareness of defect targets and improved detection AP from 57.0% to 67.0%. Zuo et al. [82] developed I2D-Net to address challenges in X-ray images such as low contrast and densely distributed small defects. The model incorporates frequency-domain filtering enhancement and a parallel detection network, with the advantage of significantly improving small defect detection performance through an iterative integrated prediction mechanism. However, it entails higher computational cost and requires more powerful hardware support. Zuo et al. [19] further proposed the STMA-Net, a multi-scale attention network based on a Spatial Transformation Attention Network (STAN) and a Multi-level Attention Feature Fusion Network (MAFFN), enhancing the generalization and transferability of automatic crack detection models. Wang et al. [83] proposed a dual-branch network architecture with an embedded self-attention guidance module (SGM) for defect detection in X-ray images of aluminum alloy castings. The model comprises a general feature network (GFN) and a subtle feature network (SFN), designed to enhance feature responses in small defect regions, suppress noise, and improve detection performance in complex environments. However, the method is sensitive to image parameters such as brightness and contrast, and improper settings may lead to performance degradation. Cheng et al. [84] proposed an innovative two-stage deep learning framework that significantly enhances defect detection performance in complex industrial scenarios through the integration of a CNN with an Image Adaptive Augmentation (IAA) module and a Dual-scale Defect Detection via Global-Local Feature fusion (DD-GLF) module. Cui et al. [44] constructed a public dataset, T-SWX-ray, and proposed a fine-grained micro-defect detection framework (SDCT) that combines semantic discrimination with contrast transformation for analyzing spiral weld seam X-ray images. Experiments demonstrated the model’s excellent performance in detecting small defects across multiple datasets. Zhang et al. [45] developed the NEU-WELD-2000 spiral welded pipe dataset and proposed the Texture-Enhanced Guided Detection Network (TEGDNet). By integrating a texture-enhanced guided detection approach with a super-resolution reconstruction decoder, the model effectively addresses challenges such as weak textures, strong interference, scale variation, and small target detection. Parlak et al. [46] combined YOLOv5 with traditional image processing techniques to identify internal defects in aluminum castings. Table 4 summarizes deep learning-based methods for X-ray industrial defect detection.

4.2 Defect Classification
Defect classification involves categorizing detected or segmented defect regions into specific types without focusing on their exact spatial locations. Early classification models, such as LeNet-5, were initially applied to small datasets; however, their performance was limited due to the computational and data constraints at that time. Subsequently, various classic CNN architectures were introduced to the analysis of X-ray defect images, significantly improving classification accuracy and feature extraction capabilities.
Liu et al. [20] constructed a fully convolutional network based on VGG-16 to classify weld defect images, achieving high accuracy on relatively small datasets. Yang et al. [21] utilized a pretrained AlexNet to extract features and classify welding defects in the GDXray dataset. Similarly, Ajmi et al. [85] fine-tuned a pretrained AlexNet model for weld defect classification in steel pipe joints, and by employing data augmentation, the classification accuracy approached 100%. In a comparable study, Nazarov et al. [86] employed the VGG-16 model for classifying weld defects in GDX-ray, achieving an accuracy of 86%. However, the study noted limitations due to insufficient training data. Regarding performance comparisons, Suyama et al. [87] conducted classification experiments on different regions of oil and gas pipeline X-ray images (e.g., weld seams and image quality indicators), evaluating the performance of AlexNet, VGG, GoogLeNet, and ResNet. Their results demonstrated that VGG achieved the highest accuracy but was more sensitive to noise. Furthermore, Jiang et al. [88] optimized a VGG-16-based defect classification network for aluminum alloy castings by incorporating attention mechanisms and weakly supervised learning strategies. Zhang et al. [22] introduced data augmentation using Wasserstein GAN (WGAN) and designed a dual-CNN ensemble model for defect classification; however, the ensemble framework was relatively simple. Li et al. [89] proposed a comprehensive data augmentation method for high-resolution X-ray weld defect classification to address the challenge of limited real samples. By constructing localized defect classification datasets and developing two data generation modes, Single Image Single Defect (SISD) and Single Image Multi Defects (SIMD), the method effectively alleviates the training difficulties of deep learning models under small-sample conditions. The study also evaluated the performance of 16 classification models and YOLO-series detection models, providing guidance for model selection and optimization.
In recent years, researchers have increasingly focused on model ensemble and data augmentation strategies to improve classification performance. Hu et al. [23] proposed a classification framework combining Inception-ResNet with image normalization, targeting internal defects in Aluminum Conductor Composite Core (ACCC), effectively mitigating interference caused by inconsistent image quality. Additionally, some studies have integrated classification models with detection frameworks to achieve end-to-end defect detection. Zhu et al. [52] presented a Faster R-CNN-based framework that combines a region proposal network (RPN) and classifier for defect detection and classification in tire X-ray images. Hou et al. [90] employed deep convolutional networks along with three resampling methods to address data imbalance in welding datasets, thereby enhancing classification performance. Say et al. [91] proposed an automated method combining data augmentation and CNNs to classify six defect categories, achieving an average accuracy of 92%. Benito et al. [11] released a novel dataset, RIAWELC, and trained an improved SqueezeNet model based on this dataset, achieving a classification accuracy exceeding 93%. A summary of deep learning-based methods for defect classification in X-ray industrial surface images is shown in Table 5.

4.3 Defect Segmentation
Defect segmentation aims to achieve pixel-level annotation of defect regions within images, thereby clearly delineating the boundaries and morphology of defects. Depending on the nature of the output, segmentation methods can be broadly categorized into semantic segmentation and instance segmentation. In early studies, Long et al. [92] introduced the Fully Convolutional Network (FCN) by replacing the fully connected layers in traditional CNNs with upsampling layers, laying the foundation for the application of deep learning in semantic segmentation.
U-Net, with its symmetrical encoder–decoder architecture and skip connections, demonstrated strong boundary-awareness capabilities and has been widely adopted in industrial weld image segmentation. Jin et al. [1] applied U-Net in combination with CLAHE preprocessing and data augmentation to effectively extract weld shapes and positions. Yang et al. [93] proposed an improved U-Net architecture for small-scale samples, significantly enhancing localization accuracy. Zhang et al. [94] incorporated an Inception module into U-Net, improving the model’s ability to perceive small and blurry defects in oil and gas pipeline images. Zong et al. [95] developed a segmentation model integrating convolutional layers, channel attention, and multi-scale feature modules, which improved defect detection performance in ultra-high-voltage (UHV) weld images. Wang et al. [96] further enhanced feature extraction and representation by introducing a VGG-based encoder combined with CBAM and SENet attention mechanisms, although segmentation accuracy still left room for improvement. Finally, Golodov et al. [97], through comparative experiments, demonstrated that the automatic feature extraction capabilities of VGG-based models are effective not only for classification tasks but also for the segmentation of welding defects.
In deep learning–based automatic detection and segmentation of weld defects, attention mechanisms have been widely applied to enhance the model’s ability to extract defect features, thereby improving segmentation and detection accuracy. Channel attention and spatial attention are two common types of attention mechanisms [98]. Yang et al. [99] proposed an improved U-Net network that integrates channel attention (SE Block) with bidirectional convolutional LSTM (BiConvLSTM), effectively addressing challenges such as complex backgrounds, low contrast, and class imbalance. Additionally, the convolutional block attention module (CBAM), a hybrid mechanism combining channel and spatial attention, has been employed to further enhance feature representation. Yang et al. [100] further developed an end-to-end NDD-Net network for nondestructive defect detection, which leverages an improved CBAM mechanism and residual dense connection convolution blocks (RDCCB) to significantly enhance the segmentation and detection of small defects.
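For concreteness, a minimal squeeze-and-excitation (SE) channel-attention block of the kind used in these works is sketched below in PyTorch; the reduction ratio is a typical default rather than a value from the cited papers.

```python
# Minimal PyTorch sketch of a squeeze-and-excitation (SE) channel-attention
# block; the reduction ratio is an assumed typical default.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # "squeeze": global spatial average
        self.fc = nn.Sequential(                     # "excitation": per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight feature channels

# Example: reweight a batch of 64-channel feature maps
feat = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(feat).shape)   # torch.Size([2, 64, 32, 32])
```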
To alleviate the challenges of high annotation costs and limited sample availability, weakly supervised learning and generative modeling strategies have been increasingly adopted. The CC-RCNN model [24] achieves high-precision segmentation using only bounding box-level annotations, effectively reducing manual labeling costs while maintaining segmentation quality. Shao et al. [25] proposed an edge-refined mask network combined with CycleGAN for defect image generation, improving the segmentation accuracy of high-voltage cable defects to 91.6%. Du et al. [101] proposed a segmentation model based on the U-Net and ResNet101 architecture, incorporating a two-stream feature encoder module (TSFM), a gated multi-layer fusion module (GMFM), and a weighted IoU loss function to enhance the segmentation accuracy of X-ray images of aluminum alloy castings. Guo et al. [102] applied a two-stage segmentation approach to accurately extract regions of interest (ROIs) in small-diameter pipes, and used grayscale density clustering to generate suspected defect region (SDR) images, enabling effective detection of weld defects in X-ray images of small-diameter tubing. In 2024, Wang et al. [103] developed an improved U-Net–based segmentation algorithm for weld defects, using VGG as the encoder and incorporating both CBAM and SENet attention mechanisms to enhance feature extraction and representation. However, the segmentation performance still leaves room for further improvement.
Additionally, Han et al. [49] addressed the limitations of traditional visual inspection methods in casting defect detection, which are often inefficient and influenced by subjective factors. They developed a ResNet18-based framework that integrates an Adaptive Depth Selection Mechanism (ADSM) and an Adaptive Receptive Field Block (ARFB) to enhance the model’s ability to perceive similar and multi-scale defects. Liu et al. [104] proposed a Core-Contour Decomposition Network (CPDNet), which combines core feature learning with contour refinement to achieve precise segmentation of low-contrast X-ray defects; however, the labeling of training and testing samples remains costly and labor-intensive. Tokime et al. [26] employed a SegNet-based deep convolutional network with an encoder–decoder architecture for automatic defect recognition (ADR) in high-throughput part production. Using semantic segmentation, the method effectively detected porosity defects in welded components, demonstrating the efficacy of automated approaches in industrial X-ray inspection. Li et al. [105] developed a dual U-shaped network architecture integrating multiple modules to achieve high-precision semantic segmentation of weld defects in pressure vessels and also introduced a new dataset named WSCR. Zuo et al. [106] proposed a multi-expert deep learning framework and hardware platform based on X-ray images for evaluating long-distance pipeline weld defects, which outperformed traditional methods in segmentation accuracy, detection performance, and noise robustness. Zhao et al. [43] proposed and released the SWRD dataset and conducted benchmark testing using YOLO-V8m as the representative model. The dataset supports three tasks: defect detection, classification, and segmentation, providing a standardized evaluation benchmark for industrial defect detection models. A summary of deep learning-based methods for defect segmentation in X-ray industrial surface images is shown in Table 6.

5 Performance Evaluation Metrics
In the field of industrial defect detection, performance evaluation metrics are generally categorized into two main types: accuracy metrics and efficiency metrics. Accuracy metrics are used to assess how precisely and reliably an algorithm performs in tasks such as defect classification, localization, and segmentation. Traditional image processing and machine learning approaches primarily focus on classification accuracy, segmentation quality, and detection precision. In contrast, efficiency metrics evaluate the algorithm’s computational speed and resource consumption, reflecting its real-time performance and practical applicability.
In industrial defect detection, a sample serves as the basic evaluation unit, and its definition depends on the task type: in classification tasks, it usually refers to an entire image; in detection tasks, to a bounding box or candidate region; and in segmentation tasks, to a pixel or mask region. In binary defect detection, defective samples are considered positive, while defect-free samples are considered negative. TP (True Positive) denotes the number of positive samples correctly identified as positive, and TN (True Negative) the number of negative samples correctly identified as negative. Conversely, FP (False Positive) corresponds to negative samples incorrectly classified as positive, and FN (False Negative) to positive samples incorrectly classified as negative. These quantities underpin the metrics commonly used to evaluate the accuracy and reliability of defect detection methods.
In defect classification tasks, commonly used performance evaluation metrics include Precision, Recall, F1-score, and Accuracy. Precision refers to the proportion of correctly predicted defective samples among all samples predicted as defective, serving as a measure of the model’s prediction accuracy. Recall represents the proportion of correctly identified defective samples among all actual defective samples in the dataset, reflecting the model’s ability to detect defects. A higher precision indicates lower random error, i.e., smaller variance, and reflects the stability of the prediction results. A higher recall implies a stronger capability of the algorithm to detect target defects. Their formulas are as follows:
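$$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}$$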
In X-ray industrial component inspection, accuracy is often used as a preliminary performance metric. Particularly in tasks with balanced data distribution, accuracy can effectively reflect whether a model possesses strong detection capability. A higher accuracy indicates smaller systematic error, i.e., lower bias, which reflects the degree of deviation between the predicted results and the actual values. Its formula is as follows:
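$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$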
The F1-score is a key metric for evaluating the overall performance of classification models, especially in industrial inspection tasks with imbalanced class distributions. It combines precision and recall, reflecting the model’s ability to balance false positives and false negatives. In X-ray image-based defect detection, the F1-score is commonly used to assess the model’s practical effectiveness in identifying defective samples, helping to avoid situations where high accuracy masks poor defect detection performance. Its formula is:
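$$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$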
The false positive rate (FPR) and false negative rate (FNR) are critical metrics for evaluating defect detection models. FPR refers to the proportion of normal regions that are incorrectly classified as defective, whereas FNR refers to the proportion of actual defects that go undetected. A high FNR indicates that critical defects, such as welding cracks or casting pores, are not detected, which may lead to equipment failures or even severe safety accidents. Therefore, in industries with stringent safety requirements—such as nuclear power, aerospace, and automotive manufacturing—minimizing FNR is often the primary objective in the design of detection systems. In contrast, a high FPR means that normal regions are erroneously marked as defective, which in large-scale production lines may result in excessive manual re-inspection and rework, thereby increasing production costs while reducing inspection efficiency. In X-ray inspection of steel pipes, tires, or aluminum castings, an excessively high FPR may force operators to spend additional time verifying defect-free regions, thus disrupting production schedules. The specific formulas are as follows:
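$$\mathrm{FPR} = \frac{FP}{FP + TN}, \qquad \mathrm{FNR} = \frac{FN}{FN + TP}$$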
In detection tasks, Average Precision (AP) comprehensively reflects a model’s localization and detection capabilities across various detection thresholds by calculating the area under the Precision-Recall curve. AP is commonly used in conjunction with the Intersection over Union (IoU) metric to evaluate the overall detection performance of models. In industry, standard evaluation protocols often employ AP@0.5 or mAP@ [0.5:0.95] to capture model performance under varying accuracy requirements. The calculation formula is as follows, where r denotes the recall. The integral represents the area under the Precision-Recall (PR) curve:
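$$\mathrm{AP} = \int_{0}^{1} p(r)\, \mathrm{d}r$$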
The computation of AP relies on the degree of overlap between the predicted bounding boxes and the ground truth boxes, which is measured by Intersection over Union. Mean Average Precision (mAP) is obtained by averaging the AP values across all defect categories, reflecting the model’s overall detection performance across multiple types of defects. The formula is:
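$$\mathrm{mAP} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{AP}_i$$

where N denotes the number of defect categories.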
Intersection over Union (IoU) and the Dice coefficient are core metrics used to evaluate the overlap between the predicted and ground truth regions in image segmentation tasks. IoU calculates the ratio of the intersection to the union of the predicted and actual regions, and is widely used in object detection and defect localization tasks. In contrast, the Dice coefficient measures the overlap relative to the average size of the two regions, making it more sensitive to small objects and boundary accuracy. As such, it is often used for evaluating the segmentation of fine-grained defects such as weld lines and cracks. X denotes the predicted result, and Y represents the ground truth (GT). The formulas are as follows:
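$$\mathrm{IoU} = \frac{|X \cap Y|}{|X \cup Y|}, \qquad \mathrm{Dice} = \frac{2\,|X \cap Y|}{|X| + |Y|}$$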
Training with the Dice loss function allows for more direct optimization of the Dice coefficient, leading to improved segmentation performance. The Dice loss is defined as follows:
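$$L_{\mathrm{Dice}} = 1 - \frac{2\,|X \cap Y|}{|X| + |Y|}$$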
In multi-class segmentation tasks, the Mean Intersection over Union (MIoU), calculated as the average IoU across all categories, provides a more comprehensive measure of the model’s segmentation accuracy over all defect classes. It is defined as:
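$$\mathrm{MIoU} = \frac{1}{k} \sum_{i=1}^{k} \mathrm{IoU}_i$$

where k is the number of defect classes.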
In industrial X-ray defect detection, some studies have also focused on preprocessing tasks such as image denoising, enhancement, or reconstruction, where it is essential to preserve key defect-related details. To this end, image quality metrics are introduced as complementary evaluation tools. Peak Signal-to-Noise Ratio (PSNR) is a commonly used indicator for assessing image processing quality, reflecting the similarity between the preprocessed or denoised image and the original image. In industrial X-ray defect detection tasks, PSNR is often used in conjunction with IoU or Dice coefficients. A higher PSNR indicates effective noise suppression while retaining the necessary defect details, thereby contributing to improved accuracy in detection and segmentation tasks.
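For reference, with MAX denoting the maximum possible gray value (255 for 8-bit images) and MSE the mean squared error between the processed image and its reference, PSNR is defined as:

$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{MAX^{2}}{\mathrm{MSE}}\right)$$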
6 Conclusion and Future Directions
X-ray imaging, owing to its exceptional penetration capabilities and non-contact nature, has emerged as a pivotal technique for the detection of both internal and surface defects in industrial components. In recent years, the advancement of automatic defect detection and precise localization algorithms has markedly improved the automation level and accuracy of X-ray-based inspection systems in industrial quality control. Nonetheless, several persistent challenges continue to hinder their practical deployment.
In deep learning-based defect detection, model training is frequently constrained by insufficient datasets and pronounced class imbalance [99]. Normal samples overwhelmingly outnumber defective ones, and substantial discrepancies exist among different defect categories. Common and easily collected defect types dominate, whereas rare instances such as fine cracks are scarce. Consequently, models tend to overfit to prevalent defects during training and exhibit limited generalization to rare categories. Furthermore, acquiring high-quality, annotated industrial defect images entails considerable cost, exacerbating data scarcity and hindering model performance. To alleviate these limitations, data augmentation and Generative Adversarial Networks (GANs) can be employed to rebalance the training distribution and mitigate class bias. Transfer learning also provides a viable strategy, whereby pretrained models (e.g., ResNet, VGG) are adapted through fine-tuning; however, significant domain discrepancies between natural image datasets and industrial defect images may compromise detection accuracy.
The applicability of automated methods varies across industrial scenarios. In environments where defect features are prominent, image quality is stable, and interference is minimal, automated detection systems can effectively replace manual visual inspection. In cases involving tiny defects or complex textured backgrounds, however, existing methods still require further improvement: small defects such as cracks, micropores, and slag inclusions are prone to false and missed detections owing to their small size and high similarity to material textures or noise. At the model level, reference [97] introduced an Attention Fusion Block (AFB) and a Residual Dense Block (RDB), in which the attention mechanism adaptively reweights feature channels and spatial positions to suppress background interference. To further enhance detection performance for tiny defects, spatial attention mechanisms are recommended [107], as they strengthen feature focus while avoiding the computational redundancy that channel attention may introduce.
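A generic spatial-attention block of the kind recommended above can be sketched as follows (a minimal PyTorch illustration in the spirit of CBAM-style spatial attention; it is not the AFB/RDB design of reference [97], only an example of how spatial reweighting can emphasize small defect regions against textured backgrounds):

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        # Two channel-pooled maps (average and max) are fused by a single
        # convolution into one attention value per spatial location.
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)        # (B, 1, H, W)
        max_map, _ = x.max(dim=1, keepdim=True)      # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                              # reweighted feature map

# Example: reweighting a feature map from a detection backbone.
features = torch.rand(2, 64, 80, 80)
print(SpatialAttention()(features).shape)  # torch.Size([2, 64, 80, 80])
```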
In addition, the computational efficiency and real-time performance of existing approaches remain insufficient for high-speed production lines: processing high-resolution images is time-consuming, which makes effective online deployment challenging. From an algorithmic perspective, lightweight models such as SqueezeNet [11] and MobileNet [22] can be adopted and further developed; their core goal is to minimize computational cost while maintaining a favorable balance between accuracy and efficiency.
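To give a sense of the scale reduction such lightweight designs offer, the following sketch (assuming torchvision model definitions) compares the parameter counts of a standard and a lightweight backbone; actual real-time gains additionally depend on the detection head, input resolution, and target hardware:

```python
import torch
from torchvision import models

heavy = models.resnet50(weights=None)            # standard backbone
light = models.mobilenet_v3_small(weights=None)  # lightweight backbone

def n_params(m: torch.nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

print(f"ResNet-50 parameters:         {n_params(heavy) / 1e6:.1f} M")
print(f"MobileNetV3-Small parameters: {n_params(light) / 1e6:.1f} M")

# Swapping a lightweight backbone into a detector reduces the cost of
# processing high-resolution X-ray frames, at some cost in accuracy.
```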
In summary, this paper comprehensively reviews recent advances in X-ray defect detection, focusing on representative deep learning methods from the past five years. It emphasizes the increasing industrial importance of X-ray inspection and advocates establishing a standardized public evaluation platform integrating datasets, benchmarks, and metrics to enable fair comparisons and accelerate development. Future research will concentrate on weakly supervised learning, cross-modal fusion, and lightweight model design to overcome annotation and deployment challenges, thereby reducing computational demands while enhancing the practicality and reliability of detection systems. This work aims to serve as a clear and detailed reference for researchers and practitioners, promoting continuous innovation and broader adoption of X-ray defect detection technologies across industrial domains.
Acknowledgement: This manuscript does not include content generated by artificial intelligence. AI translation tools were used only for proofreading some sentences.
Funding Statement: This work was supported in part by the Project of National Key Laboratory of Advanced Casting Technologies under Grant CAT2023-002.
Author Contributions: The authors confirm contribution to the paper as follows: Conceptualization, Xin Wen; Methodology, Siru Chen; Validation, Siru Chen and Xin Wen; Formal analysis, Han Yu, Xingjie Li and Ling Zhong; Investigation, Siru Chen and Xin Wen; Resources, Kechen Song; Data curation, Kechen Song; Writing—original draft preparation, Xin Wen, Siru Chen, Han Yu, Xingjie Li, Ling Zhong and Kechen Song; Writing—review and editing, Xin Wen, Siru Chen, Han Yu, Ling Zhong and Kechen Song; Visualization, Siru Chen and Han Yu; Supervision, Kechen Song; Project administration, Xin Wen; Funding acquisition, Xin Wen. All authors reviewed the results and approved the final version of the manuscript.
Availability of Data and Materials: Not applicable.
Ethics Approval: Not applicable.
Conflicts of Interest: The authors declare no conflicts of interest to report regarding the present study.
References
1. Jin G, Oh S, Lee Y, Shin S. Extracting weld bead shapes from radiographic testing images with U-Net. Appl Sci. 2021;11(24):12051. doi:10.3390/app112412051. [Google Scholar] [CrossRef]
2. Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C, et al. SSD: single shot multibox detector. In: Computer Vision—ECCV 2016. Cham, Switzerland: Springer International Publishing; 2016. p. 21–37. doi:10.1007/978-3-319-46448-0_2. [Google Scholar] [CrossRef]
3. Medak D, Posilović L, Subašić M, Budimir M, Lončarić S. Automated defect detection from ultrasonic images using deep learning. IEEE Trans Ultrason Ferroelectr Freq Control. 2021;68(10):3126–34. doi:10.1109/TUFFC.2021.3081750. [Google Scholar] [PubMed] [CrossRef]
4. Ghorai S, Mukherjee A, Gangadaran M, Dutta PK. Automatic defect detection on hot-rolled flat steel products. IEEE Trans Instrum Meas. 2012;62(3):612–21. doi:10.1109/TIM.2012.2218677. [Google Scholar] [CrossRef]
5. Ebayyeh AARMA, Mousavi A. A review and analysis of automatic optical inspection and quality monitoring methods in electronics industry. IEEE Access. 2020;8:183192–183271. doi:10.1109/ACCESS.2020.3029127. [Google Scholar] [CrossRef]
6. López de la Rosa F, Sánchez-Reolid R, Gómez-Sirvent JL, Morales R, Fernández-Caballero A. A review on machine and deep learning for semiconductor defect classification in scanning electron microscope images. Appl Sci. 2021;11(20):9508. doi:10.3390/app11209508. [Google Scholar] [CrossRef]
7. Masad E, Jandhyala VK, Dasgupta N, Somadevan N, Shashidhar N. Characterization of air void distribution in asphalt mixes using X-ray computed tomography. J Mater Civ Eng. 2002;14(2):122–9. doi:10.1061/(asce)0899-1561(2002)14:. [Google Scholar] [CrossRef]
8. Abd Halim S, Ibrahim A, Jayes MI, Manurung YHP. Weld defect features extraction on digital radiographic image using Chan-Vese model. In: Proceedings of the IEEE 9th International Colloquium on Signal Processing and Its Applications (CSPA); 2013 Mar 8–10; Kuala Lumpur, Malaysia. Piscataway, NJ, USA: IEEE. p. 67–72. doi:10.1109/cspa.2013.6530016. [Google Scholar] [CrossRef]
9. Tang Y, Zhang X, Li X, Guan X. Application of a new image segmentation method to detection of defects in castings. Int J Adv Manuf Technol. 2009;43(5):431–9. doi:10.1007/s00170-008-1720-1. [Google Scholar] [CrossRef]
10. Haobo Y. A survey of industrial surface defect detection based on deep learning. In: Proceedings of the 2024 International Conference on Cyber-Physical Social Intelligence (ICCSI); 2024 Dec 18–20; Doha, Qatar. Piscataway, NJ, USA: IEEE. p. 1–6. doi:10.1109/ICCSI62669.2024.10799405. [Google Scholar] [CrossRef]
11. Totino B, Spagnolo F, Perri S. RIAWELC: a novel dataset of radiographic images for automatic weld defects classification. Int J Electr Comput Eng Res. 2023;3(1):13–7. doi:10.53375/ijecer.2023.320. [Google Scholar] [CrossRef]
12. Ji C, Wang H, Li H. Defects detection in weld joints based on visual attention and deep learning. NDT E Int. 2023;133(6):102764. doi:10.1016/j.ndteint.2022.102764. [Google Scholar] [CrossRef]
13. Ferguson M, Ak R, Lee YTT, Law KH. Automatic localization of casting defects with convolutional neural networks. In: Proceedings of the 2017 IEEE International Conference on Big Data (Big Data); 2017 Dec 11–14; Boston, MA, USA. Piscataway, NJ, USA: IEEE. p. 1726–35. doi:10.1109/BigData.2017.8258115. [Google Scholar] [CrossRef]
14. Alaknanda, Anand RS, Kumar P. Flaw detection in radiographic weldment images using morphological watershed segmentation technique. NDT E Int. 2009;42(1):2–8. doi:10.1016/j.ndteint.2008.06.005. [Google Scholar] [CrossRef]
15. Dong X, Taylor CJ, Cootes TF. A random forest-based automatic inspection system for aerospace welds in X-ray images. IEEE Trans Autom Sci Eng. 2020;18(4):2128–41. doi:10.1109/TASE.2020.3039115. [Google Scholar] [CrossRef]
16. Cozma A, Harris L, Qi H, Ji P, Guo W, Yuan S. Defect detection in tire X-ray images: conventional methods meet deep structures. arXiv:2402.18527. 2024. [Google Scholar]
17. Wang Y, Shi F, Tong X. A welding defect identification approach in X-ray images based on deep convolutional neural networks. In: Proceedings of the International Conference on Intelligent Computing. Cham, Switzerland: Springer International Publishing; 2019. p. 53–64. doi:10.1007/978-3-030-26766-7_6. [Google Scholar] [CrossRef]
18. Liu W, Shan S, Chen H, Wang R, Sun J, Zhou Z. X-ray weld defect detection based on AF-RCNN. Weld World. 2022;66(6):1165–77. doi:10.1007/s40194-022-01281-w. [Google Scholar] [CrossRef]
19. Zuo F, Liu J, Fu M, Wang L, Zhao Z. STMA-Net: a spatial transformation-based multiscale attention network for complex defect detection with X-ray images. IEEE Trans Instrum Meas. 2024;73:1–11. doi:10.1109/TIM.2024.3376014. [Google Scholar] [CrossRef]
20. Liu B, Zhang X, Gao Z, Chen L. Weld defect images classification with VGG16-based neural network. In: Proceedings of the International Forum on Digital TV and Wireless Multimedia Communications. Singapore: Springer; 2017. p. 215–23. doi:10.1007/978-981-10-8108-8_20. [Google Scholar] [CrossRef]
21. Yang L, Fan J, Huo B, Liu Y. Inspection of welding defect based on multi-feature fusion and a convolutional network. J Nondestruct Eval. 2021;40(4):90. doi:10.1007/s10921-021-00823-4. [Google Scholar] [CrossRef]
22. Zhang H, Chen Z, Zhang C, Xi J, Le X. Weld defect detection based on deep learning method. In: Proceedings of the 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE); 2019 Aug 22–26; Vancouver, BC, Canada. Piscataway, NJ, USA: IEEE. p. 1574–9. doi:10.1109/COASE.2019.8842998. [Google Scholar] [CrossRef]
23. Hu Y, Wang J, Zhu Y, Wang Z, Chen D, Zhang J, et al. Automatic defect detection from X-ray scans for aluminum conductor composite core wire based on classification neutral network. NDT E Int. 2021;124(2):102549. doi:10.1016/j.ndteint.2021.102549. [Google Scholar] [CrossRef]
24. Zhang B, Wang X, Cui J, Wu J, Wang X, Li Y, et al. Welding defects classification by weakly supervised semantic segmentation. NDT E Int. 2023;138:102899. doi:10.1016/j.ndteint.2023.102899. [Google Scholar] [CrossRef]
25. Shao F, Qi X, Hou B, Dong J, Zhu W, Jie J. Instance segmentation based non-destructive inspection of high-voltage cable defects. In: Proceedings of the 2024 7th International Conference on Robotics, Control and Automation Engineering (RCAE); 2024 Mar 15–17; Beijing, China. Piscataway, NJ, USA: IEEE. p. 429–33. doi:10.1109/RCAE62637.2024.10834264. [Google Scholar] [CrossRef]
26. Tokime RB, Maldague X, Perron L. Automatic defect detection for X-ray inspection: Identifying defects with deep convolutional network. In: Proceedings of the Canadian Institute for Non-destructive Evaluation (CINDE); 2019 Jun 18–20; Edmonton, AB, Canada. p. 18–20. [Google Scholar]
27. Matsumoto T, Aoyama K, Goto K, Kajikawa K, Sugimoto K, Iwata K. Development of high accuracy welding defect detection technique for X-ray images. Mitsubishi Heavy Ind Tech Rev. 2022;59(1):1–8. [Google Scholar]
28. Wang X, Zscherpel U, Tripicchio P, D’Avella S, Zhang B, Wu J, et al. A comprehensive review of welding defect recognition from X-ray images. J Manuf Process. 2025;140(4):161–80. doi:10.1016/j.jmapro.2025.02.039. [Google Scholar] [CrossRef]
29. Liu X, Liu J, Wang Z, Wang L, Zhang H. Basic-class and cross-class hybrid feature learning for class-imbalanced weld defect recognition. IEEE Trans Ind Inform. 2022;19(9):9436–46. doi:10.1109/TII.2022.3228702. [Google Scholar] [CrossRef]
30. Ramírez DP, Veitía BDR, Ariosa PF, Hernández AE, Gilart RA, Roca ÁS, et al. Pore segmentation in industrial radiographic images using adaptive thresholding and morphological analysis. Trends Agric Environ Sci. 2023:e230008. doi:10.46420/taes.e230008. [Google Scholar] [CrossRef]
31. Ramou N. Segmentation of weld defects using multiphase level set by the piecewise-smooth Mumford-Shah model. Russ J Nondestruct Test. 2019;55(2):155–61. doi:10.1134/s1061830919020074. [Google Scholar] [CrossRef]
32. Li XG, Miao CY, Wang J, Zhang Y. Automatic defect detection method for the steel cord conveyor belt based on its X-ray images. In: Proceedings of the 2011 International Conference on Control, Automation and Systems Engineering (CASE); 2011 Jul 30–31; Singapore. Piscataway, NJ, USA: IEEE. p. 1–4. doi:10.1109/ICCASE.2011.5997624. [Google Scholar] [CrossRef]
33. Ren J, Ren R, Green M, Huang X. Defect detection from X-ray images using a three-stage deep learning algorithm. In: Proceedings of the 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE); 2019 May 5–8; Edmonton, AB, Canada. Piscataway, NJ, USA: IEEE. p. 1–4. doi:10.1109/CCECE.2019.8861944. [Google Scholar] [CrossRef]
34. Boaretto N, Centeno TM. Automated detection of welding defects in pipelines from radiographic images DWDI. NDT E Int. 2017;86(10):7–13. doi:10.1016/j.ndteint.2016.11.003. [Google Scholar] [CrossRef]
35. Du W, Shen H, Fu J, Zhang G, He Q. Approaches for improvement of the X-ray image defect detection of automobile casting aluminum parts based on deep learning. NDT E Int. 2019;107:102144. doi:10.1016/j.ndteint.2019.102144. [Google Scholar] [CrossRef]
36. Farrier M, Achterkirchen TG, Weckler GP, Mrozack A. Very large area CMOS active-pixel sensor for digital radiography. IEEE Trans Electron Devices. 2009;56(11):2623–31. doi:10.1109/ted.2009.2031001. [Google Scholar] [CrossRef]
37. Kim HK, Ahn JK, Cho G. Development of a lens-coupled CMOS detector for an X-ray inspection system. Nucl Instrum Methods Phys Res Sect A. 2005;545(1–2):210–6. doi:10.1016/j.nima.2005.01.310. [Google Scholar] [CrossRef]
38. Mery D, Riffo V, Zscherpel U, Mondragon G, Lillo I, Lobel H, et al. GDXray: the database of X-ray images for nondestructive testing. J Nondestruct Eval. 2015;34(4):42. doi:10.1007/s10921-015-0315-7. [Google Scholar] [CrossRef]
39. Guo W, Qu H, Liang L. WDXI: the dataset of X-ray image for weld defects. In: Proceedings of the 2018 14th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD); 2018 Jul 28–30; Piscataway, NJ, USA: IEEE. p. 1051–55. doi:10.1109/FSKD.2018.8686975. [Google Scholar] [CrossRef]
40. Parlak IE, Emel E. Deep learning-based detection of aluminum casting defects and their types. Appl Artif Intell. 2023;118(2):105636. doi:10.1016/j.engappai.2022.105636. [Google Scholar] [CrossRef]
41. Yang D, Cui Y, Yu Z, Yuan H. Deep learning based steel pipe weld defect detection. Appl Artif Intell. 2021;35(15):1237–49. doi:10.1080/08839514.2021.1975391. [Google Scholar] [CrossRef]
42. Cui W, Song K, Wang Y, Lv G, Yan Y, Yu H. A rapid screening method for suspected defects in steel pipe welds by combining correspondence mechanism and normalizing flow. IEEE Trans Ind Inform. 2024;20(9):11171–11180. doi:10.1109/TII.2024.3399934. [Google Scholar] [CrossRef]
43. Zhao X, Wu J, Zhang B, Wen H, Wang X, Li Y, et al. SWRD: a dataset of radiographic image of seam weld for defect detection. J Nondestruct Eval. 2025;44(2):50. doi:10.21203/rs.3.rs-5369992/v1. [Google Scholar] [CrossRef]
44. Cui W, Song K, Zhang Y, Zhang Y, Lv G, Yan Y. Fine-grained tiny defect detection in spiral welds: a joint framework combining semantic discrimination and contrast transformation. IEEE Trans Instrum Meas. 2025;74:1–15. doi:10.1109/tim.2025.3551901. [Google Scholar] [CrossRef]
45. Zhang Y, Song K, Cui W, Yan Y, Lv G, Zhang Y. TEGDNet: texture enhancement guided detection network for spiral welded pipeline defect detection. Measurement. 2025;256(1):118052. doi:10.1016/j.measurement.2025.118052. [Google Scholar] [CrossRef]
46. Parlak İE, Emel E. Deep learning-based detection of internal defect types and their grades in high-pressure aluminum castings. Measurement. 2025;242(14):116119. doi:10.1016/j.measurement.2024.116119. [Google Scholar] [CrossRef]
47. Malarvel M, Singh H. An autonomous technique for weld defects detection and classification using multi-class support vector machine in X-radiography image. Optik. 2021;231(10):166342. doi:10.1016/j.ijleo.2021.166342. [Google Scholar] [CrossRef]
48. Duan F, Yin S, Song P, Zhang W, Zhu C, Yokoi H. Automatic welding defect detection of X-ray images by using cascade adaboost with penalty term. IEEE Access. 2019;7:125929–38. doi:10.1109/access.2019.2927258. [Google Scholar] [CrossRef]
49. Yu H, Li X, Song K, Shang E, Liu H, Yan Y. Adaptive depth and receptive field selection network for defect semantic segmentation on castings X-rays. NDT E Int. 2020;116(6):102345. doi:10.1016/j.ndteint.2020.102345. [Google Scholar] [CrossRef]
50. Hernandez S, Saez D, Mery D, Sequeira M. Automated defect detection in aluminium castings and welds using neuro-fuzzy classifiers. In: Proceedings of the 16th World Conference on Non-Destructive Testing (WCNDT 2004); 2004 Aug 30–Sep 3; Montreal, QC, Canada. [Google Scholar]
51. Shaloo M, Schnall M, Klein T, Huber N, Reitinger B. A review of non-destructive testing (NDT) techniques for defect detection: application to fusion welding and future wire arc additive manufacturing processes. Materials. 2022;15(10):3697. doi:10.3390/ma15103697. [Google Scholar] [PubMed] [CrossRef]
52. Zhu Q, Ai X. The defect detection algorithm for tire X-ray images based on deep learning. In: Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC); 2018 Jun 27–29; Chongqing, China. Piscataway, NJ, USA: IEEE. p. 138–42. doi:10.1109/ICIVC.2018.8492908. [Google Scholar] [CrossRef]
53. Mahmoudi A, Regragui F. Welding defect detection by segmentation of radiographic images. In: Proceedings of the 2009 WRI World Congress on Computer Science and Information Engineering; 2009 Mar 31–Apr 2; Los Angeles, CA, USA. Piscataway, NJ, USA: IEEE. p. 111–5. doi:10.1109/CSIE.2009.501. [Google Scholar] [CrossRef]
54. El-Tokhy MS, Mahmoud II. Classification of welding flaws in gamma radiography images based on multi-scale wavelet packet feature extraction using support vector machine. J Nondestruct Eval. 2015;34(4):34. doi:10.1007/s10921-015-0305-9. [Google Scholar] [CrossRef]
55. Movafeghi A, Mirzapour M, Yahaghi E. Using nonlocal operators for measuring dimensions of defects in radiograph of welded objects. Eur Phys J Plus. 2021;136(6):655. doi:10.1140/epjp/s13360-021-01652-0. [Google Scholar] [CrossRef]
56. Rajab MI, El-Benawy TA, Al-Hazmi MW. Application of frequency domain processing to X-ray radiographic images of welding defects. J X-Ray Sci Technol. 2007;15(3):147–56. doi:10.3233/xst-2007-00178. [Google Scholar] [CrossRef]
57. Saberironaghi A, Ren J, El-Gindy M. Defect detection methods for industrial products using deep learning techniques: a review. Algorithms. 2023;16(2):95. doi:10.3390/a16020095. [Google Scholar] [CrossRef]
58. Ajmi C, El Ferchichi S, Laabidi K. New procedure for weld defect detection based-gabor filter. In: Proceedings of the 2018 International Conference on Advanced Systems and Electric Technologies (IC_ASET); 2018 Mar 24–26; Hammamet, Tunisia. Piscataway, NJ, USA: IEEE. p. 11–6. doi:10.1109/ASET.2018.8379826. [Google Scholar] [CrossRef]
59. Shao J, Du D, Chang B, Shi H. Automatic weld defect detection based on potential defect tracking in real-time radiographic image sequence. NDT E Int. 2012;46:14–21. doi:10.1016/j.ndteint.2011.10.008. [Google Scholar] [CrossRef]
60. Malarvel M, Sethumadhavan G, Bhagi PCR, Kar S, Saravanan T, Krishnan A. Anisotropic diffusion based denoising on X-radiography images to detect weld defects. Digit Signal Process. 2017;68:112–26. doi:10.1016/j.dsp.2017.05.014. [Google Scholar] [CrossRef]
61. Yahaghi E, Mirzapour M, Movafeghi A, Rokrok B. Interlaced bilateral filtering and wavelet thresholding for flaw detection in the radiography of weldments. Eur Phys J Plus. 2020;135(1):42. doi:10.1140/epjp/s13360-020-00119-y. [Google Scholar] [CrossRef]
62. Wang M, Chai L. Application of an improved watershed algorithm in welding image segmentation. Trans China Weld Inst. 2011;47(5):352–7. doi:10.1134/S106183091105010X. [Google Scholar] [CrossRef]
63. Tian Y, Du D, Cai G, Wang L, Zhang H. Automatic defect detection in X-ray images using image data fusion. Tsinghua Sci Technol. 2006;11(6):720–4. doi:10.1016/s1007-0214(06)70255-3. [Google Scholar] [CrossRef]
64. Shen J, Chen P, Su L, Shi T, Tang Z, Liao G. X-ray inspection of TSV defects with self-organizing map network and Otsu algorithm. Microelectron Reliab. 2016;67:129–34. doi:10.1016/j.microrel.2016.10.011. [Google Scholar] [CrossRef]
65. Kamalakannan A, Rajamanickam G. Spatial smoothing based segmentation method for internal defect detection in X-ray images of casting components. In: Proceedings of the 2017 Trends in Industrial Measurement and Automation (TIMA); 2017 Dec 7–9; Chennai, India. Piscataway, NJ, USA: IEEE. p. 1–6. doi:10.1109/TIMA.2017.8064796. [Google Scholar] [CrossRef]
66. Wang X, Huang D, Xu H. An efficient local Chan-Vese model for image segmentation. Pattern Recogn. 2010;43(3):603–18. doi:10.1016/j.patcog.2009.08.002. [Google Scholar] [CrossRef]
67. Abdelkader R, Ramou N, Khorchef M, Chetih N, Boutiche Y. Segmentation of X-ray image for welding defects detection using an improved Chan-Vese model. Mater Today Proc. 2021;42(2):2963–7. doi:10.1016/j.matpr.2020.12.806. [Google Scholar] [CrossRef]
68. Abdelkader R, Ramou N, Khorchef M. Welding defects detection in radiographic images using an improved denoising technique combined with an enhanced Chan-Vese model. Int J Eng Res Afr. 2022;60:155–72. doi:10.4028/p-w863h3. [Google Scholar] [CrossRef]
69. Radi D, Abo-Elsoud MEA, Khalifa F. Accurate segmentation of weld defects with horizontal shapes. NDT E Int. 2022;126(10):102599. doi:10.1016/j.ndteint.2021.102599. [Google Scholar] [CrossRef]
70. Shao J, Shi H, Du D, Wang L, Cao H. Automatic weld defect detection in real-time X-ray images based on support vector machine. In: Proceedings of the 2011 4th International Congress on Image and Signal Processing (CISP); 2011 Oct 15–17; Shanghai, China. Piscataway, NJ, USA: IEEE. p. 1842–6. doi:10.1109/CISP.2011.6100637. [Google Scholar] [CrossRef]
71. Patil RV, Reddy YP. An autonomous technique for multi class weld imperfections detection and classification by support vector machine. J Nondestruct Eval. 2021;40(3):76. doi:10.1007/s10921-021-00801-w. [Google Scholar] [CrossRef]
72. Wu B, Zhou J, Ji X, Yin Y, Shen X. Research on approaches for computer aided detection of casting defects in X-ray images with feature engineering and machine learning. Procedia Manuf. 2019;37(3):394–401. doi:10.1016/j.promfg.2019.12.065. [Google Scholar] [CrossRef]
73. Ramana EV, Penekalapati SV, Namala KK. Identification of weld sub-surface defects by radiographic images using texture features. E3S Web Conf. 2024;552(3):01017. doi:10.1051/e3sconf/202455201017. [Google Scholar] [CrossRef]
74. Liu M, Chen Y, Xie J, He L, Zhang Y. LF-YOLO: a lighter and faster YOLO for weld defect detection of X-ray image. IEEE Sens J. 2023;23(7):7430–9. doi:10.1109/JSEN.2023.3247006. [Google Scholar] [CrossRef]
75. Zhang R, Liu D, Bai Q, Fu B, Hu J, Song J. Research on X-ray weld seam defect detection and size measurement method based on neural network self-optimization. Eng Appl Artif Intell. 2024;133(1):108045. doi:10.1016/j.engappai.2024.108045. [Google Scholar] [CrossRef]
76. Su G, Su X, Wang Q, Luo L, Lu W. Research on X-ray weld defect detection of steel pipes by integrating ECA and EMA dual attention mechanisms. Appl Sci. 2025;15(8):4519. doi:10.3390/app15084519. [Google Scholar] [CrossRef]
77. Zuo F, Liu J, Zhang H, Chen Z, Yan B, Wang L. A complex welding defect detection method based on Active Learning in pipeline transportation system. IEEE Trans Instrum Meas. 2025;74:1–12. doi:10.1109/TIM.2025.3551482. [Google Scholar] [CrossRef]
78. García Pérez A, Gómez Silva MJ, De La Escalera Hueso A. Automated defect recognition of castings defects using neural networks. J Nondestruct Eval. 2022;41(1):11. doi:10.1007/s10921-021-00842-1. [Google Scholar] [CrossRef]
79. Zuo F, Liu J, Fu M, Lu J, Liu H. An effective detection method for complex weld defects based on adaptive feature pyramid. In: Proceedings of the 2023 CAA Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPROCESS); 2023 Sep 22–24; Beijing, China. Piscataway, NJ, USA: IEEE. p. 1–5. doi:10.1109/safeprocess58597.2023.10295953. [Google Scholar] [CrossRef]
80. Fu JL, Shen K. Automated detection of defects with casting DR image based on deep learning. In: Proceedings of the 2021 IEEE Far East NDT New Technology & Application Forum (FENDT); 2021; Kunming, China. Piscataway, NJ, USA: IEEE. p. 58–62. doi:10.1109/FENDT54151.2021.9749682. [Google Scholar] [CrossRef]
81. Wang X, Zhang B, Yu X. Zoom in on the target network for the prediction of defective images and welding defects’ location. NDT E Int. 2024;143(2):103059. doi:10.1016/j.ndteint.2024.103059. [Google Scholar] [CrossRef]
82. Zuo F, Liu J, Zhao X, Chen L, Wang L. An X-ray-based automatic welding defect detection method for special equipment system. IEEE/ASME Trans Mechatron. 2023;29(3):2241–52. doi:10.1109/TMECH.2023.3327713. [Google Scholar] [CrossRef]
83. Wang Y, Hu C, Chen K, Yin Z. Self-attention guided model for defect detection of aluminium alloy casting on X-ray image. Comput Electr Eng. 2020;88(4):106821. doi:10.1016/j.compeleceng.2020.106821. [Google Scholar] [CrossRef]
84. Cheng H, Jiang H, Jing D, Huang L, Gao J, Zhang Y, et al. Multiscale welding defect detection method based on image adaptive enhancement. Knowl Based Syst. 2025;327(11):114174. doi:10.1016/j.knosys.2025.114174. [Google Scholar] [CrossRef]
85. Ajmi C, Zapata J, Elferchichi S, Zaafouri M, Laabidi K. Deep learning technology for weld defects classification based on transfer learning and activation features. Adv Mater Sci Eng. 2020;2020(1):1574350. doi:10.1155/2020/1574350. [Google Scholar] [CrossRef]
86. Nazarov RM, Gizatullin ZM, Konstantinov ES. Classification of defects in welds using a convolution neural network. In: Proceedings of the 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus); 2021 Jan 26–29; St. Petersburg, Russia. Piscataway, NJ, USA: IEEE. p. 1641–4. doi:10.1109/ElConRus51938.2021.9396301. [Google Scholar] [CrossRef]
87. Suyama FM, Delgado MR, Da Silva RD, Centeno TM. Deep neural networks based approach for welded joint detection of oil pipelines in radiographic images with Double Wall Double Image exposure. NDT E Int. 2019;105:46–55. doi:10.1016/j.ndteint.2019.05.002. [Google Scholar] [CrossRef]
88. Jiang L, Wang Y, Tang Z, Miao Y, Chen S. Casting defect detection in X-ray images using convolutional neural networks and attention-guided data augmentation. Measurement. 2021;170:108736. doi:10.1016/j.measurement.2020.108736. [Google Scholar] [CrossRef]
89. Li L, Wang P, Ren J, Lv Z, Li X, Gao H, et al. Synthetic data augmentation for high-resolution X-ray welding defect detection and classification based on a small number of real samples. Eng Appl Artif Intell. 2024;133(4):108379. doi:10.1016/j.engappai.2024.108379. [Google Scholar] [CrossRef]
90. Hou W, Wei Y, Jin Y, Zhu C. Deep features based on a DCNN model for classifying imbalanced weld flaw types. Measurement. 2019;131(5):482–9. doi:10.1016/j.measurement.2018.09.011. [Google Scholar] [CrossRef]
91. Say D, Zidi S, Qaisar SM, Krichen M. Automated categorization of multiclass welding defects using the X-ray image augmentation and convolutional neural network. Sensors. 2023;23(14):6422. doi:10.3390/s23146422. [Google Scholar] [PubMed] [CrossRef]
92. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2015 Jun 7–12; Boston, MA, USA. Piscataway, NJ, USA: IEEE. p. 3431–40. doi:10.1109/CVPR.2015.7298965. [Google Scholar] [CrossRef]
93. Yang L, Wang H, Huo B, Li F, Liu Y. An automatic welding defect location algorithm based on deep learning. NDT E Int. 2021;120(12):102435. doi:10.1016/j.ndteint.2021.102435. [Google Scholar] [CrossRef]
94. Zhang S, Wang X, Zhang T. Combining multi-scale U-Net with transformer for welding defect detection of oil/gas pipeline. IEEE Access. 2025;13(4):5437–45. doi:10.1109/ACCESS.2024.3521220. [Google Scholar] [CrossRef]
95. Zong Y, Liu L, Guo D, Zhang H, Shen M. A novel method for segmentation and detection of weld defects in UHV equipment based on multiscale feature fusion. Russ J Nondestruct Test. 2024;60(11):1305–13. doi:10.1134/s1061830924602903. [Google Scholar] [CrossRef]
96. Wang X, He F, Huang X. A new method for deep learning detection of defects in X-ray images of pressure vessel welds. Sci Rep. 2024;14(1):6312. doi:10.1038/s41598-024-56794-9. [Google Scholar] [PubMed] [CrossRef]
97. Golodov VA, Maltseva AA. Approach to weld segmentation and defect classification in radiographic images of pipe welds. NDT E Int. 2022;127:102597. doi:10.1016/j.ndteint.2021.102597. [Google Scholar] [CrossRef]
98. Xu L, Dong S, Wei H, Ren Q, Huang J, Liu J. Defect signal intelligent recognition of weld radiographs based on YOLO V5-IMPROVEMENT. J Manuf Process. 2023;99(1):373–81. doi:10.1016/j.jmapro.2023.05.058. [Google Scholar] [CrossRef]
99. Yang L, Song S, Fan J, Huo B, Li E, Liu Y. An automatic deep segmentation network for pixel-level welding defect detection. IEEE Trans Instrum Meas. 2021;71:1–10. doi:10.1109/tim.2021.3127645. [Google Scholar] [CrossRef]
100. Yang L, Fan J, Huo B, Li E, Liu Y. A nondestructive automatic defect detection method with pixelwise segmentation. Knowl-Based Syst. 2022;242(12):108338. doi:10.1016/j.knosys.2022.108338. [Google Scholar] [CrossRef]
101. Du W, Shen H, Fu J. Automatic defect segmentation in X-ray images based on deep learning. IEEE Trans Ind Electron. 2020;68(12):12912–20. doi:10.1109/TIE.2020.3047060. [Google Scholar] [CrossRef]
102. Guo Y, Gao W, Wang Z. Convolutional neural network based defect detection in small diameter pipe weld. In: Proceedings of the 2024 9th International Conference on Intelligent Computing and Signal Processing (ICSP); 2024 Apr 19–21; Xi’an, China. Piscataway, NJ, USA: IEEE. p. 1564–8. doi:10.1109/ICSP62122.2024.10743247. [Google Scholar] [CrossRef]
103. Wang S, Zhu B, Gao W, Wang Z. Weld defect segmentation algorithm based on improved U-net. In: Proceedings of the 2024 6th International Conference on Intelligent Control, Measurement and Signal Processing (ICMSP); 2024 May 24–26; Hangzhou, China. Piscataway, NJ, USA: IEEE. p. 665–8. doi:10.1109/ICMSP64464.2024.10866549. [Google Scholar] [CrossRef]
104. Liu X, Liu J, Zhang H, Zhang H. Low-contrast X-ray image defect segmentation via a novel core-profile decomposition network. Comput Ind. 2024;161(9):104123. doi:10.1016/j.compind.2024.104123. [Google Scholar] [CrossRef]
105. Li X, Wei Y, Lv Z, Wang P, Li L, Sun M, et al. High resolution weld semantic defect detection algorithm based on integrated double U structure. Sci Rep. 2025;15(1):17849. doi:10.1038/s41598-025-02421-0. [Google Scholar] [PubMed] [CrossRef]
106. Zuo F, Liu J, Fu M, Wang L, Zhao Z. An X-ray-based multiexpert inspection method for automatic welding defect assessment in intelligent pipeline system. IEEE/ASME Trans Mechatron. 2025;30(3):1753–64. doi:10.1109/tmech.2024.3408337. [Google Scholar] [CrossRef]
107. Wang X, D’Avella S, Liang Z, Zhang B, Wu J, Zacherpel U, et al. On the effect of the attention mechanism for automatic welding defects detection based on deep learning. Expert Syst Appl. 2025;268(1):126386. doi:10.1016/j.eswa.2025.126386. [Google Scholar] [CrossRef]
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

