Computers, Materials & Continua
DOI:10.32604/cmc.2022.026855
Article

Multi-View Auxiliary Diagnosis Algorithm for Lung Nodules

Shi Qiu1, Bin Li2,*, Tao Zhou3, Feng Li4 and Ting Liang5

1Key Laboratory of Spectral Imaging Technology CAS, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an, 710119, P.R. China
2School of Information Science and Technology, Northwest University, Xi’an, 710127, P.R. China
3School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, P.R. China
4Institute of Education, University College London, London, The United Kingdom
5Department of Radiology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, 10061, P.R. China
*Corresponding Author: Bin Li. Email: lib@nwu.edu.cn
Received: 05 January 2022; Accepted: 11 March 2022

Abstract: The lung is an important organ of the human body. Due to air pollution, more and more people suffer from lung diseases, many of which are highly infectious, such as pulmonary tuberculosis and the novel coronavirus COVID-19. A lung nodule is a high-density, roughly spherical lesion in the lung. Physicians must spend considerable time and energy examining computed tomography image sequences to make a diagnosis, which is inefficient. For this reason, computer-assisted diagnosis of lung nodules has become the current mainstream. In computer-aided diagnosis, reducing the false positive rate while keeping the missed detection rate low is a key difficulty and focus of current research. To solve this problem, we propose a three-dimensional optimization model to extract suspected regions, improve the traditional deep belief network, and modify the between-class dispersion matrix. We construct a multi-view model that fuses local three-dimensional information into two-dimensional images, thereby reducing the complexity of the algorithm and alleviating the unbalanced-training problem caused by the small number of positive samples. Experiments show that the false positive rate of the proposed algorithm is as low as 12%, which meets clinical application standards.

Keywords: Lung nodules; deep belief network; computer-aided diagnosis; multi-view

1  Introduction

Lung nodules are the main lesions of the lungs. If they are not detected and treated in time, malignant lung nodules will develop into lung cancer, which seriously affects human life and health [1–3]. Lee et al. [4] proposed random forest classification of lung nodules. Wu et al. [5] constructed a hierarchical learning network to extract lung nodules. Diciotti et al. [6] constructed a morphological model to segment lung nodules. Li et al. [7] established a Principal Component Analysis (PCA) model to identify lung nodules. Song et al. [8] used Local Binary Pattern (LBP) features to recognize imaging signs of lung lesions. Song et al. [9] established a locally optimal classification network to identify lung nodules. Teramoto et al. [10] used a cylindrical nodule-enhancement filter to enhance the image information of lung nodules. Tariq et al. [11] introduced a neuro-fuzzy classifier to identify lung nodules. De Carvalho Filho et al. [12] used quality threshold clustering, a genetic algorithm and a diversity index to detect solitary lung nodules. Parveen et al. [13] used Support Vector Machine (SVM) kernels to classify lung nodules. Hua et al. [14] applied deep learning to the classification of lung nodules. Shen et al. [15] established a multi-scale convolutional neural network to classify lung nodules. Sun et al. [16] distinguished lung nodules based on Three-Dimensional (3D) texture features of the lung. Javaid et al. [17] distinguished the signs of lung nodules from gray-level, geometric and statistical points of view. Qiu et al. [18] used a Gestalt-based algorithm to detect nodules. Huang et al. [19] detected lung nodules with 3D convolutional neural networks. Shaukat et al. [20] combined multiple features to reduce the false detection rate of lung nodules. Han et al. [21] established a diameter and volume system to judge benign and malignant lung nodules. Nishio et al. [22] used gradient tree boosting and Bayesian optimization to assist in the diagnosis of lung nodules. Xie et al. [23] realized automatic classification of lung nodules by fusing multiple features at the decision-making level. Saien et al. [24] proposed sparse field level sets and boosting algorithms to reduce the false detection rate of lung nodules. Qiu et al. [25] detected lung nodules on Computed Tomography (CT) images. Qiu et al. [26] detected solitary lung nodules based on a brain-computer interface. Rey et al. [27] used CT studies based on soft computing to achieve lung nodule segmentation. Mittapalli et al. [28] built a multi-layer Multiscale Convolutional Neural Network (CNN) to reduce the risk of false detection of lung nodules. Manickavasagam et al. [29] developed Computer Aided Diagnosis (CAD) software based on CNN to detect lung nodules. El-Askary et al. [30] optimized Random Forest features to localize lung nodules.

In general, computer-aided detection of lung nodules is moving towards intelligent development, with deep learning frameworks as the focus of current research. The main open issues are: 1) how to enhance the stability of the deep learning framework; 2) how to train effectively when the number of positive samples is limited; and 3) how to obtain an effective feature fusion method for lung nodule recognition.

In response to the above problems, this paper 1) improves the structure of the deep belief network to build a more stable model; 2) proposes a multi-view model that conforms to the principles of vision, increasing the number of positive samples and balancing the numbers of positive and negative samples; and 3) constructs a lung nodule recognition algorithm based on multi-feature-vector (FV) fusion.

2  Algorithm

Lung nodules are sphere-like in space and appear as partially highlighted circles on CT images. The diagnosis of lung nodules is usually divided into two parts: segmentation and recognition. Segmentation obtains the suspected lung nodule regions [31], and recognition aims to ensure a low missed detection rate while reducing the false positive rate [32]. We focus on the recognition part in this paper. Based on the common deep belief network structure, we build the algorithm flow chart shown in Fig. 1. According to the principle of vision, a sample model is established from six perspectives, which can quickly present the spatial structure and increase the number of positive samples. Images from the different views are input into the improved deep belief network to obtain feature vectors. Then, a feature fusion algorithm is proposed to recognize lung nodules.


Figure 1: The algorithm flow chart

2.1 Three-Dimensional Reconstruction Algorithm

Lung nodules present a spherical shape in the lungs, which is an important feature for judging whether a region is a lung nodule or not. Therefore, it is necessary to reconstruct the suspected area in three dimensions. The Feldkamp-Davis-Kress (FDK) algorithm is the current mainstream algorithm for 3D reconstruction. The specific process is as follows: first, the two-dimensional projection data are weighted; then, the weighted projection data at different projection angles are filtered; finally, weighted back-projection reconstruction is carried out along the ray direction.

The key step of FDK algorithm is filtering. The Shepp-Logan filter function [33] is usually used as follows:

$$h_k=\begin{cases}\dfrac{2}{\pi^2(4k^2-1)}, & k\in\left(0,\dfrac{N}{2}\right)\\[6pt]\dfrac{2}{\pi^2\{4(N-k)^2-1\}}, & k\in\left(\dfrac{N}{2}+1,\,N-1\right)\end{cases}\tag{1}$$

where N is the filter width. Based on the morphological characteristics of three-dimensional lung nodules, a smooth function is constructed to reduce noise and interference from other tissues.

$$y(x)=\exp\left(-\frac{Sx}{N}\right)^{2},\quad x\in(0,\,N-1)\tag{2}$$

where S is an adjustable parameter.

When the voxel size of the reconstruction matrix is greater than the width of the filter, the reconstructed image cannot fully express the high-frequency information, and high-frequency aliasing occurs. For this purpose, a truncation function is constructed:

$$C(x)=\begin{cases}1-\left(1-\cos\dfrac{x\pi}{2M}\right)^{2}, & x\in(0,\,M-1)\\[4pt]0, & \text{others}\end{cases}\tag{3}$$

$$M=N\times\frac{Size_p}{Size_v}\tag{4}$$

where Size_p is the pixel number of the filter and Size_v is the pixel number of the reconstructed voxel. The time-domain Shepp-Logan filter function is transformed into the frequency domain through the Fast Fourier Transform (FFT), and a new filter function is then constructed based on it:

$$F_A=\mathrm{IFFT}\big(\mathrm{FFT}(h)\times S\times C\big)\tag{5}$$

Through the above processing, the suspected lung nodule area is smoothed, but the contrast and edge features are suppressed to a certain extent. Therefore, a high-frequency enhancement filter is designed:

$$F_{B1}(x)=\begin{cases}1+b_1\left\{1-\exp(-p_1g_1)^{4}\right\}, & g_1\le\dfrac{\pi}{2}\\[4pt]1+b_1, & \text{others}\end{cases}\tag{6}$$

where b1 and p1 are parameters: when b1 > 0, high-frequency enhancement is realized, and p1 controls the frequency range of the enhancement. g1 denotes the angle.

On the basis of FB1, a further filter stage is added iteratively, which can enhance different frequency bands:

$$F_{B}(x)=\begin{cases}F_{B1}(x)+b_2\left\{1-\exp(-p_2g_2)^{4}\right\}, & g_2\le\dfrac{\pi}{2}\\[4pt]F_{B1}(x)+b_2, & \text{others}\end{cases}\tag{7}$$

Therefore, the final filter function is obtained:

$$F=\mathrm{IFFT}\big(\mathrm{FFT}(h)\times S\times C\times F_B\big)\tag{8}$$
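The following is a minimal numpy sketch of how the composite filter of Eqs. (1)–(8) could be assembled, under the reconstructed forms of the equations given above. The default parameter values, the mapping of g1/g2 to a normalized angular-frequency axis, and the Size_p/Size_v values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def shepp_logan_kernel(N):
    """Spatial-domain Shepp-Logan kernel h_k of Eq. (1) (sketch)."""
    k = np.arange(N, dtype=float)
    h = np.empty(N)
    half = N // 2
    h[:half] = 2.0 / (np.pi**2 * (4.0 * k[:half]**2 - 1.0))
    h[half:] = 2.0 / (np.pi**2 * (4.0 * (N - k[half:])**2 - 1.0))
    return h

def smooth_window(N, S=3.0):
    """Smoothing function y(x) of Eq. (2); S is the adjustable parameter (value assumed)."""
    x = np.arange(N, dtype=float)
    return np.exp(-S * x / N) ** 2

def truncation_window(N, size_p, size_v):
    """Truncation function C(x) of Eqs. (3)-(4)."""
    M = int(N * size_p / size_v)
    x = np.arange(N, dtype=float)
    C = np.zeros(N)
    C[:M] = 1.0 - (1.0 - np.cos(x[:M] * np.pi / (2.0 * M))) ** 2
    return C

def hf_enhancement(N, b1=3.0, p1=12.0, b2=1.0, p2=5.0):
    """Two-stage high-frequency enhancement F_B of Eqs. (6)-(7).

    g is taken as a normalized angular frequency in [0, pi]; this mapping and
    the second-stage defaults are placeholders (see Tab. 1 for the parameter study)."""
    g = np.linspace(0.0, np.pi, N)
    fb1 = np.where(g <= np.pi / 2,
                   1.0 + b1 * (1.0 - np.exp(-p1 * g) ** 4),
                   1.0 + b1)
    fb = np.where(g <= np.pi / 2,
                  fb1 + b2 * (1.0 - np.exp(-p2 * g) ** 4),
                  fb1 + b2)
    return fb

def composite_filter(N=512, size_p=1.0, size_v=2.0):
    """Final filter F of Eq. (8): compose the windows in the frequency domain."""
    h = shepp_logan_kernel(N)
    S = smooth_window(N)
    C = truncation_window(N, size_p, size_v)
    FB = hf_enhancement(N)
    return np.fft.ifft(np.fft.fft(h) * S * C * FB).real
```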

2.2 Improved Deep Belief Network

A Deep Belief Network (DBN) is a generative model that allows the entire neural network to generate training data with maximum probability by training the weights between its neurons [34]. This network has received widespread attention since its inception, and a series of studies and applications have been carried out on it. Bu et al. [35] constructed a DBN to learn high-level features. Shen et al. [36] introduced Boltzmann machines to constrain the DBN. Khatami et al. [37] first reduced the dimensionality of medical data and then extracted high-level features through a DBN. Zhong et al. [38] improved the fine-tuning process to reduce never-responding or always-responding latent factors. Lu et al. [39] introduced a reconstruction error model to modify the DBN and predict the probability of cardiovascular events.

Because the initial weight matrix between the last hidden layer and the classification layer of the deep belief network is randomly generated, the weight matrix has no discriminative ability, and the features cannot be guaranteed to suit the classification task. Thus, we improve the deep belief network model, as shown in Fig. 2.


Figure 2: Improved DBN chart

The network consists of Restricted Boltzmann Machines (RBMs) and Linear Discriminant Analysis (LDA). It contains an input layer {h0}, hidden layers {h1, h2, …, hN}, and a Label (classification) layer. The number of input layer nodes equals the dimension of the input samples, and the number of classification layer nodes equals the number of categories in the input sample set.

The C-class training sample set is defined as $X^{(i)}=\{x_1^{(i)},x_2^{(i)},\ldots,x_{N_i}^{(i)}\}$, $i=1,2,\ldots,C$, where $N_i$ is the number of samples in class $i$ and $x_j^{(i)}$ is the $j$-th sample of class $i$.

LDA is an effective feature extraction method. Its purpose is to find a linear transformation matrix W that maximizes the ratio of the between-class dispersion to the within-class dispersion:

$$W_o=\arg\max_{W}\frac{\left|W^{T}S_bW\right|}{\left|W^{T}S_wW\right|}\tag{9}$$

where Wo is the optimal projection matrix, Sb is the between-class dispersion matrix, and Sw is the within-class dispersion matrix. The process of solving Wo is thus transformed into solving the generalized eigenvalue problem:

$$S_bW=\lambda S_wW\tag{10}$$

Due to the rank limitation of LDA, Rank(Sb) ≤ C−1, only C−1 eigenvectors with non-zero eigenvalues can be obtained under the Fisher criterion, which does not meet the requirements. For this reason, we define a new between-class dispersion matrix based on the two-class problem:

$$S_{nb}=\sum_{i=1}^{C-1}\sum_{j=i+1}^{C}\left(\frac{1}{N_i}X^{(i)}X^{(i)T}-\frac{1}{N_j}X^{(j)}X^{(j)T}\right)\tag{11}$$

It can be seen that Rank(Snb) ≤ min(Rank(X), Rank(X^T)) = Rank(X). Therefore, multiple discriminant projection vectors can be obtained to meet the requirement on the number of nodes in the DBN classification layer. According to Eq. (9), the improved optimal projection matrix Wo = [w1, w2, …, wc] is obtained.
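As a concrete illustration, the following numpy sketch computes the modified between-class scatter of Eq. (11) and solves the generalized problem of Eqs. (9)–(10) for the projection vectors used to initialize the weights between the last hidden layer and the Label layer. The within-class scatter is taken in its standard LDA form and the (d, N_i) data layout is assumed; neither is spelled out in the paper.

```python
import numpy as np

def modified_between_class_scatter(class_samples):
    """S_nb of Eq. (11): pairwise differences of per-class (1/N_i) X X^T.

    class_samples: list of arrays, each of shape (d, N_i) with one column
    per sample (an assumed data layout)."""
    C = len(class_samples)
    d = class_samples[0].shape[0]
    outer = [X @ X.T / X.shape[1] for X in class_samples]
    Snb = np.zeros((d, d))
    for i in range(C - 1):
        for j in range(i + 1, C):
            Snb += outer[i] - outer[j]
    return Snb

def within_class_scatter(class_samples):
    """Standard within-class scatter S_w (centered per class); assumed form."""
    d = class_samples[0].shape[0]
    Sw = np.zeros((d, d))
    for X in class_samples:
        Xc = X - X.mean(axis=1, keepdims=True)
        Sw += Xc @ Xc.T
    return Sw

def discriminant_projections(class_samples, n_components):
    """Leading eigenvectors of S_w^{-1} S_nb, i.e., W_o = [w_1, ..., w_c],
    used here as the initial last-layer weight matrix."""
    Snb = modified_between_class_scatter(class_samples)
    Sw = within_class_scatter(class_samples)
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Snb)
    order = np.argsort(-vals.real)
    return vecs[:, order[:n_components]].real
```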

2.3 The Multi-View Model Fusion

Lung nodules present a sphere-like structure in space and a cross-sectional structure on CT images. Blood vessels and the trachea are the main sources of interference during the detection of lung nodules: because CT imaging is a tomographic scan, these tubular structures are truncated and their cross-sections are also round, which resembles the two-dimensional morphology of lung nodules and makes detection difficult. Therefore, a multi-directional model is needed to carry out the research.

Identifying lung nodules from only a single location carries a high risk of false and missed detection [40], and when training with deep learning we cannot obtain enough positive samples. Therefore, we build a multi-angle model that conforms to the principle of vision, which reduces the false and missed detection rates while increasing the number of positive samples.

Because the sizes of lung nodules are not consistent, each nodule is first normalized to a fixed size. The nodule is then examined through its axial, coronal and sagittal section images. Analyzing the characteristics of lung nodules from these three perspectives increases the number of positive samples and helps balance positive and negative samples. However, axial, coronal and sagittal sections alone cannot show the overall information of a lung nodule, so we introduce the concept of perspective projection.

$$M(x,y)=P_m(x,y),\quad\text{when }\sum_{n=1}^{m-1}P_n(x,y)=0,\ P_m(x,y)\neq 0\tag{12}$$

where M(x,y) is the pixel value of the perspective projection image at (x,y), and Pm(x,y) is the pixel value of the m-th slice of the input volume along the projection direction. This model accords with the principle of visual occlusion. Its computational cost is greatly reduced compared with a full 3D algorithm, while it still presents the three-dimensional structure of the object.
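To make the occlusion rule of Eq. (12) concrete, here is a minimal numpy sketch: for each ray through the volume, the first non-zero voxel along the viewing direction is kept. The binarization of the volume and the way the six cube faces are enumerated are assumptions made for illustration.

```python
import numpy as np

def perspective_projection(volume, axis=0):
    """Occlusion-style projection of Eq. (12): keep the first non-zero voxel
    encountered along the viewing direction for each (x, y) ray."""
    vol = np.moveaxis(volume, axis, 0)        # rays run along axis 0
    nonzero = vol != 0
    first = np.argmax(nonzero, axis=0)        # index of the first non-zero voxel per ray
    any_hit = nonzero.any(axis=0)
    proj = np.take_along_axis(vol, first[None, ...], axis=0)[0]
    return np.where(any_hit, proj, 0)

# Example (assumed enumeration): six views of a cube-normalized nodule volume `cube`
# views = [perspective_projection(cube, ax) for ax in range(3)] + \
#         [perspective_projection(cube[::-1], 0),
#          perspective_projection(cube[:, ::-1], 1),
#          perspective_projection(cube[:, :, ::-1], 2)]
```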

Therefore, based on the six faces of a cube, we construct a projection model to present the spatial structure. Each of the six view images is converted into a feature vector (FV) by the DBN, and a feature fusion strategy is then established. Finally, an SVM classifier is connected to obtain the classification result.

In order to verify the performance of the proposed algorithm, we build the following fusion algorithms, as shown in Fig. 3.


Figure 3: Fusion algorithms chart

TYPE1: Input the axial image, generate the feature vector, and then identify lung nodules with the SVM classifier.

TYPE2: Input the axial, coronal and sagittal images respectively (the coronal and sagittal images are generated from the axial images) and judge each with an SVM classifier; the final label is taken as the majority (mode) of the individual decisions for a more accurate result.

TYPE3: Input a single-view image, generate the feature vector, and then identify lung nodules with the SVM classifier.

TYPE4: Input three-view images, generate a feature vector for each, and judge the properties with SVM classifiers; the final label is taken as the majority (mode) of the individual decisions.

TYPE5: Input six-view images, generate a feature vector for each, and judge the properties with SVM classifiers; the final label is taken as the majority (mode) of the individual decisions.

TYPE6: Input six-view images and divide them into three groups to generate feature vectors; judge the properties with SVM classifiers, and take the final label as the majority (mode) of the individual decisions.

TYPE7: Input the six-view images, generate their feature vectors, fuse them, and recognize lung nodules with a single SVM classifier (a sketch of this fusion is given below).
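The following is a minimal sketch of a TYPE7-style fusion, assuming the per-view DBN feature vectors are concatenated into one vector before a single SVM. The (n_samples, 6, d) layout and the RBF kernel are assumptions, not details specified in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def fuse_and_classify(view_features_train, y_train, view_features_test):
    """Concatenate the six per-view DBN feature vectors and classify with one SVM.

    view_features_*: arrays of shape (n_samples, 6, d), where d is the DBN
    feature dimension (an assumed layout)."""
    X_train = view_features_train.reshape(len(view_features_train), -1)
    X_test = view_features_test.reshape(len(view_features_test), -1)
    clf = SVC(kernel="rbf")   # kernel choice is an assumption
    clf.fit(X_train, y_train)
    return clf.predict(X_test)
```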

3  Experiment and Result Analysis

300 sets of lung CT data are collected from the early International Lung Cancer Action Project database [18]. The ratio of test to training data is 1:2. This database includes lung nodule and normal lung data, in which the lung nodule data are labeled by physicians using a blind-labeling method to construct the data set. The data were acquired at different times and with different equipment to ensure data diversity and the reliability of the algorithm.

The algorithm is implemented on a WIN7 system using VS2018. The detection speed is positively correlated with the complexity and amount of data, averaging 31 s per sequence.

According to lung nodule scale, a nodule with a radius of less than 15 pixels is a small nodule, and one with a radius of more than 30 pixels is a large nodule. Thus, we assign lung nodules smaller than 15²π pixels to cubes of 32³ voxels, and lung nodules larger than 30²π pixels to cubes of 64³ voxels.

3.1 Parameter Selection

For the deep learning network, the input image size is 512 × 512; images that do not meet this requirement are normalized to 512 × 512. The proposed algorithm includes the parameters b1, p1, b2 and p2. To evaluate the filter under different parameter combinations, we introduce AOM and AVM [41], which relate the three-dimensional reconstructed region Rg to the region Rs marked by the physician:

$$AOM=\frac{R_s\cap R_g}{R_s\cup R_g}\tag{13}$$

$$AVM=\frac{R_s-R_g}{R_s}\tag{14}$$

where AOM increases with better performance of the proposed algorithm; in contrast, AVM decreases as the performance improves.
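The two measures of Eqs. (13)–(14) can be computed directly from binary masks; the sketch below assumes Rs and Rg are given as boolean arrays and interprets Eq. (14) as the fraction of the marked region not covered by the reconstruction.

```python
import numpy as np

def aom_avm(seg, ref):
    """Overlap measures of Eqs. (13)-(14).

    seg: binary mask of the reconstructed region R_g;
    ref: binary mask of the physician-marked region R_s."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    inter = np.logical_and(ref, seg).sum()
    union = np.logical_or(ref, seg).sum()
    aom = inter / union if union else 0.0                      # Eq. (13)
    avm = (ref.sum() - inter) / ref.sum() if ref.sum() else 0.0  # Eq. (14), assumed reading
    return aom, avm
```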

As shown in Tab. 1, when b1, p1 ≠ 0 and b2 = p2 = 0, only the first-layer filter works. As b1 and p1 increase, AOM and AVM reach their peak values at b1 = 3 and p1 = 12. On this basis, the second-layer filter is added, and AOM and AVM reach their peak values at b1 = 3 and p1 = 15. This shows that the proposed algorithm can suppress the background and enhance the lung nodule area.

Table 1

We analyzed the filter response curve, as shown in Fig. 4. The traditional Shepp-Logan function can enhance the high-frequency part, but the enhancement effect is limited, and the difference between the target and the background is not large enough. With the parameters selected above, the proposed algorithm increases the difference between the target and the background and reaches a peak value at the edge of lung nodules, illustrating its effectiveness.


Figure 4: The filter response curve

3.2 Performance of Multi-View Fusion Algorithm

In order to verify the effect of different algorithms, we use the ROC curve for measurement, as shown in Fig. 5. TYPE1: judging from a single section image ignores the three-dimensional features of lung nodules, and the representativeness of the selected image directly affects the recognition effect. TYPE2: with more section planes, the risk of missed and false detection is reduced; however, the three-dimensional features of lung nodules still cannot be fully displayed by section structures.


Figure 5: ROC of different fusion algorithms

TYPE3: a one-view image is used as input, fusing part of the three-dimensional information; the effect is better than single-section-image input. TYPE4, 5: as the number of perspective images increases, more three-dimensional information is fused into the views; the result of comprehensive judgment is the best, but the speed decreases as the number of classifiers grows. TYPE6: reducing the number of classifiers helps improve the speed, but the grouping method directly affects the classification results. TYPE7: in general, inputting the six-view images together identifies lung nodules well while keeping the number of classifiers small.

A multi-view aided diagnosis algorithm based on small samples of lung nodules is proposed in this paper, and the characteristics of lung nodules have been analyzed. The model is built according to the high brightness and round shape of lung nodules. Seven types of connection frameworks are proposed for the experiments, and the experimental results show that the seventh achieves the best result.

This method can be extended to other fields, but it cannot be applied in practice directly. In order to get better results, it is necessary to analyze the characteristics of the target to be detected.

3.3 Lung Nodule Recognition Performance

In order to verify the performance of the improved deep belief network algorithm, we measure different algorithms in terms of sensitivity (SEN), specificity (SPE) and false positive fraction (FPF) [42]:

$$SEN=\frac{TP}{TP+FN}\tag{15}$$

$$SPE=\frac{TN}{TN+FP}\tag{16}$$

$$FPF=\frac{FP+FN}{TP+FP+TN+FN}\tag{17}$$
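These three measures follow directly from the confusion-matrix counts; a minimal sketch, implementing Eqs. (15)–(17) exactly as written above, is given below.

```python
def sen_spe_fpf(tp, fp, tn, fn):
    """Sensitivity, specificity and false positive fraction of Eqs. (15)-(17)."""
    sen = tp / (tp + fn)                      # Eq. (15)
    spe = tn / (tn + fp)                      # Eq. (16)
    fpf = (fp + fn) / (tp + fp + tn + fn)     # Eq. (17)
    return sen, spe, fpf
```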

As shown in Tab. 2 and Fig. 6, the TYPE7 fusion algorithm outperforms the other algorithms. The PCA algorithm [7] replaces the original features with fewer features; the new features are linear combinations of the old ones that maximize the sample variance and are uncorrelated with each other, which makes the method sensitive to the training data. The LBP algorithm [8] offers local gray-level invariance and rotation invariance, but the lung nodule pattern cannot be well expressed by a single feature alone. The DBN algorithm [34] does not consider the initial weight matrix between the last hidden layer and the classification layer when building the network, resulting in a weight matrix without discriminative ability. The EDBN algorithm [38] optimizes the fine-tuning procedure to improve accuracy. Our algorithm improves the initialization of the weights between the hidden layer and the classification layer and improves the stability of the algorithm, achieving a good recognition effect.

Table 2


Figure 6: ROC curves of different fusion algorithms

The detection speed of the proposed algorithm is positively correlated with the number of lesions in the sequence. The average detection time of each sequence is less than 3 minutes, which greatly reduces the time of manual interpretation.

3.4 Algorithm Effect Display

We display images of lung nodules from different angles, as shown in Fig. 7. We select a normal lung nodule (Fig. 7a), a lung nodule with vascular adhesion (Fig. 7b) and a blood vessel (Fig. 7c). Lung nodules and blood vessels are both spheroid-like and cannot be distinguished from axial images alone, but they can be distinguished from views V1–V6. The multi-view model displays local texture information on two-dimensional images, which makes the obtained features richer than those of a single section image.


Figure 7: Algorithm effect display

4  Conclusion

In order to meet the needs of high accuracy and a low false positive rate in computer-aided detection of lung nodules, the traditional deep belief network is improved to enhance network stability. A multi-view model that conforms to the principle of visual perception is proposed to balance the numbers of positive and negative samples, and a feature fusion mechanism is established to realize the extraction of lung nodules. On this basis, the subsequent judgment of lung nodule signs is promoted.

Funding Statement: This work was supported by Science and Technology Rising Star of Shaanxi Youth (No. 2021KJXX-61); The Open Project Program of the State Key Lab of CAD&CG, Zhejiang University (No. A2206); The China Postdoctoral Science Foundation (No. 2020M683696XB); Natural Science Basic Research Plan in Shaanxi Province of China (No. 2021JQ-455); Natural Science Foundation of China (No. 62062003), Key Research and Development Project of Ningxia (Special projects for talents) (No. 2020BEB04022); North Minzu University Research Project of Talent Introduction (No. 2020KYQD08).

Conflicts of Interest: Bin Li contributed equally to this work. The authors declare that they have no conflicts of interest to report regarding the present study.

References

1. Y. Gültepe, “Performance of lung cancer prediction methods using different classification algorithms,” Computers, Materials & Continua, vol. 67, no. 2, pp. 2015–2028, 2021.

2. A. T. Oliver, K. R. Jayasankar, T. Sekar, K. Devi, R. Shalini et al., “Early detection of lung carcinoma using machine learning,” Intelligent Automation & Soft Computing, vol. 30, no. 3, pp. 755–770, 2021.

3. T. Zhou, H. Lu, Z. Yang, S. Qiu, B. Huo et al., “The ensemble deep learning model for novel COVID-19 on CT images,” Applied Soft Computing, vol. 98, no. 10, pp. 106885, 2020.

4. S. L. A. Lee, A. Z. Kouzani and E. J. Hu, “Random forest based lung nodule classification aided by clustering,” Computerized Medical Imaging and Graphics, vol. 34, no. 7, pp. 535–542, 2010.

5. D. Wu, L. Lu, J. Bi, Y. Shinagawa, K. Boyer et al., “Stratified learning of local anatomical context for lung nodules in CT images,” in 2010 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, San Francisco, CA, USA, pp. 2791–2798, 2010.

6. S. Diciotti, S. Lombardo, M. Falchini, G. Picozzi and M. Mascalchi, “Automated segmentation refinement of small lung nodules in CT scans by local shape analysis,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 12, pp. 3418–3428, 2011.

7. R. Li, J. H. Lewis, X. Jia, T. Zhao, W. Liu et al., “On a PCA-based lung motion model,” Physics in Medicine & Biology, vol. 56, no. 18, pp. 6009, 2011.

8. L. Song, X. Liu, L. Ma, C. Zhou, X. Zhao et al., “Using HOG-LBP features and MMP learning to recognize imaging signs of lung lesions,” in 2012 25th IEEE Int. Sym. on Computer-Based Medical Systems (CBMS), Rome, Italy, pp. 1–4, 2012.

9. Y. Song, W. Cai, Y. Wang and D. D. Feng, “Location classification of lung nodules with optimized graph construction,” in 2012 9th IEEE Int. Sym. on Biomedical Imaging (ISBI), Barcelona, Spain, pp. 1439–1442, 2012.

10. A. Teramoto and H. Fujita, “Fast lung nodule detection in chest CT images using cylindrical nodule-enhancement filter,” International Journal of Computer Assisted Radiology and Surgery, vol. 8, no. 2, pp. 193–205, 2013.

11. A. Tariq, M. U. Akram and M. Y. Javed, “Lung nodule detection in CT images using neuro fuzzy classifier,” in 2013 Fourth International Workshop on Computational Intelligence in Medical Imaging (CIMI), Singapore, pp. 49–53, 2013.

12. A. O. De Carvalho Filho, W. B. De Sampaio, A. C. Silva, A. C. de Paiva, R. A. Nunes et al., “Automatic detection of solitary lung nodules using quality threshold clustering, genetic algorithm and diversity index,” Artificial Intelligence in Medicine, vol. 60, no. 3, pp. 165–177, 2014.

13. S. S. Parveen and C. Kavitha, “Classification of lung cancer nodules using SVM Kernels,” International Journal of Computer Applications, vol. 95, no. 25, pp. 25–28, 2014.

14. K. L. Hua, C. H. Hsu, S. C. Hidayati, W. H. Cheng and Y. J. Chen, “Computer-aided classification of lung nodules on computed tomography images via deep learning technique,” Onco Targets and Therapy, vol. 8, pp. 2015–2022, 2015.

15. W. Shen, M. Zhou, F. Yang, C. Yang and J. Tian, “Multi-scale convolutional neural networks for lung nodule classification,” in Int. Conf. on Information Processing in Medical Imaging, Cham, Springer, pp. 588–599, 2015.

16. W. Sun, X. Huang, T. L. Tseng, J. Zhang and W. Qian, “Computerized lung cancer malignancy level analysis using 3D texture features,” in Medical Imaging 2016: Computer-Aided Diagnosis, International Society for Optics and Photonics, vol. 9785, pp. 978538, 2016.

17. M. Javaid, M. Javid, M. Z. U. Rehman and S. I. A. Shah, “A novel approach to CAD system for the detection of lung nodules in CT images,” Computer Methods and Programs in Biomedicine, vol. 135, no. 1, pp. 125–139, 2016.

18. S. Qiu, D. Wen, Y. Cui and J. Feng, “Lung nodules detection in CT images using Gestalt-based algorithm,” Chinese Journal of Electronics, vol. 25, no. 4, pp. 711–718, 2016.

19. X. Huang, J. Shan and V. Vaidya, “Lung nodule detection in CT using 3D convolutional neural networks,” in 2017 IEEE 14th Int. Sym. on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, pp. 379–383, 2017.

20. F. Shaukat, G. Raja, A. Gooya and A. F. Frangi, “Fully automatic detection of lung nodules in CT images using a hybrid feature set,” Medical Physics, vol. 44, no. 7, pp. 3615–3629, 2017.

21. D. Han, M. A. Heuvelmans and M. Oudkerk, “Volume versus diameter assessment of small pulmonary nodules in CT lung cancer screening,” Translational Lung Cancer Research, vol. 6, no. 1, pp. 52, 2017.

22. M. Nishio, M. Nishizawa, O. Sugiyama, R. Kojima, M. Yakami et al., “Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization,” PloS One, vol. 13, no. 4, pp. e0195875, 2018.

23. Y. Xie, J. Zhang, Y. Xia, M. Fulham and Y. Zhang, “Fusing texture, shape and deep model-learned information at decision level for automated classification of lung nodules on chest CT,” Information Fusion, vol. 42, no. 2, pp. 102–110, 2018.

24. S. Saien, H. A. Moghaddam and M. Fathian, “A unified methodology based on sparse field level sets and boosting algorithms for false positives reduction in lung nodules detection,” International Journal of Computer Assisted Radiology and Surgery, vol. 13, no. 3, pp. 397–409, 2018.

25. S. Qiu, Q. Guo, D. Zhou, Y. Jin and T. Zhou, “Isolated pulmonary nodules characteristics detection based on CT images,” IEEE Access, vol. 7, pp. 165597–165606, 2019.

26. S. Qiu, J. Li, M. Cong, C. Wu, Y. Qin et al., “Detection of solitary pulmonary nodules based on brain-computer interface,” Computational and Mathematical Methods in Medicine, Article ID 4930972, pp. 10, 2020.

27. A. Rey, B. Arcay and A. Castro, “A hybrid CAD system for lung nodule detection using CT studies based in soft computing,” Expert Systems with Applications, vol. 168, no. 5, pp. 114259, 2021.

28. P. S. Mittapalli and V. Thanikaiselvan, “Multiscale CNN with compound fusions for false positive reduction in lung nodule detection,” Artificial Intelligence in Medicine, vol. 113, no. 5, pp. 102017, 2021.

29. R. Manickavasagam, S. Selvan and M. Selvan, “CAD system for lung nodule detection using deep learning with CNN,” Medical & Biological Engineering & Computing, vol. 60, no. 1, pp. 221–228, 2022.

30. N. S. El-Askary, M. A. M. Salem and M. I. Roushdy, “Features processing for Random Forest optimization in lung nodule localization,” Expert Systems with Applications, vol. 193, pp. 116489, 2022.

31. H. Shakir, T. M. Rasool and H. Rasheed, “3-D segmentation of lung nodules using hybrid level sets,” Computers in Biology and Medicine, vol. 96, no. 8, pp. 214–226, 2018.

32. N. Tajbakhsh and K. Suzuki, “Comparing two classes of end-to-end machine-learning models in lung nodule detection and classification,” Pattern Recognition, vol. 63, no. 9, pp. 476–486, 2017.

33. Y. Tsutsui, S. Awamoto, K. Himuro, Y. Umezu, S. Baba et al., “Characteristics of smoothing filters to achieve the guideline recommended positron emission tomography image without harmonization,” Asia Oceania Journal of Nuclear Medicine and Biology, vol. 6, no. 1, pp. 15–23, 2018.

34. G. E. Hinton, S. Osindero and Y. W. Teh, “A fast learning algorithm for deep belief nets,” Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.

35. S. Bu, Z. Liu, J. Han, J. Wu and R. Ji, “Learning high-level feature by deep belief networks for 3-D model retrieval and recognition,” IEEE Transactions on Multimedia, vol. 16, no. 8, pp. 2154–2167, 2014.

36. F. Shen, J. Chao and J. Zhao, “Forecasting exchange rate using deep belief networks and conjugate gradient method,” Neurocomputing, vol. 167, pp. 243–253, 2015.

37. A. Khatami, A. Khosravi, C. P. Lim and S. Nahavandi, “A wavelet deep belief network-based classifier for medical images,” in Int. Conf. on Neural Information Processing, Springer, Hong Kong, China, pp. 467–474, 2016.

38. P. Zhong, Z. Gong, S. Li and C. B. Schönlieb, “Learning to diversify deep belief networks for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 6, pp. 3516–3530, 2017.

39. P. Lu, S. Guo, H. Zhang, Q. Li, Y. Wang et al., “Research on improved depth belief network-based prediction of cardiovascular diseases,” Journal of Healthcare Engineering, vol. 2018, no. 3, pp. 1–9, 2018.

40. Y. Jiang, Y. Zhang, C. Lin, D. Wu and C. T. Lin, “EEG-based driver drowsiness estimation using an online multi-view and transfer TSK fuzzy system,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 3, pp. 1752–1764, 2020.

41. Q. Tian, M. Cao, S. Chen and H. Yin, “Structure-exploiting discriminative ordinal multi-output regression,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 1, pp. 266–280, 2021.

42. S. Kim and D. Jun, “Artifacts reduction using multi-scale feature attention network in compressed medical images,” Computers, Materials & Continua, vol. 70, no. 2, pp. 3267–3279, 2022.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.