Open Access

ARTICLE

Explainable Context-Aware Fusion Network for Non-Small Cell Lung Cancer Analysis with Application to Smart Healthcare Systems

Muhammad Waqar1, Zeshan Aslam Khan1,*, Arthur Chang2,*, Zhishan Guo3, Chun-Liang Lai4, Chuan-Yu Chang5

1 International Graduate School of Artificial Intelligence, National Yunlin University of Science and Technology, Yunlin, Taiwan
2 Department of Information Management, National Yunlin University of Science and Technology, Yunlin, Taiwan
3 Department of Computer Science, North Carolina State University, Raleigh, NC, USA
4 Division of Pulmonology and Critical Care, Department of Internal Medicine, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, No. 2, Minsheng Road, Dalin, Chiayi City, Taiwan
5 Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Yunlin, Taiwan

* Corresponding Authors: Zeshan Aslam Khan. Email: email; Arthur Chang. Email: email

(This article belongs to the Special Issue: Artificial Intelligence Models in Healthcare: Challenges, Methods, and Applications)

Computer Modeling in Engineering & Sciences 2026, 147(1), 41 https://doi.org/10.32604/cmes.2026.078892

Abstract

Lung cancer (LC) is among the most dangerous and progressively spreading cancers, making timely LC diagnosis an urgent need. Various imaging-based studies have been conducted for accurate LC examination through computed tomography (CT), X-ray, and histopathology. Worldwide, the proportion of LC-affected patients in hospitals is growing, increasing the volume of imaging data that must be processed quickly for early examination. To facilitate histopathological imaging-based automated and timely decision making for accurate LC prediction, a Context Aware Fusion Network (CAFNet) combining holistic feature learning and spatially localized feature learning is proposed in this study for the efficient extraction and processing of global and local features, respectively. CAFNet exploits histopathological tissues to ensure uniformity of local and global attributes for extracting contextual information. The conducted research achieves histopathological image enhancement using median filtering (MF) and contrast-limited adaptive histogram equalization (CLAHE). Moreover, the classifying power of the proposed CAFNet is enhanced through superior feature extraction strategies, such as Mobile Inverted Bottleneck Convolution (MIBConv) employed with Spatial Attention with Residual Learning (SARL) and Channel Attention with Residual Learning (CARL). An innovative, partially adaptive optimization approach is utilized to fine-tune the degree of adaptivity in the learning process of the network. The descriptive behavior of CAFNet is explored through explainable artificial intelligence (XAI) strategies such as Gradient-Weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-Agnostic Explanations (LIME). The proposed network improves average classification accuracy by 7.36% while reducing model complexity by 85% to 99% compared with existing benchmark models.
The study also addresses user accessibility challenges by providing a web-based interface, built with Gradio, for real-time interaction.

Keywords

Lung cancer; histopathology; context-aware network; explainable AI; web-based Interface

1  Introduction

Globally, the number of patients with cancer types like lung cancer (LC) is increasing exponentially [1,2]. The growing mortality rate caused by LC makes it the most dangerous among cancer types [3]. The approximate 5-year survival rate of LC patients is around 10% for patients with advanced disease or those managed with non-surgical methods, whereas roughly 34% of patients overall reach the 5-year survival mark. Dealing with advanced LC through histopathological image examination is critical for prognosis due to its significance in providing insights regarding pathological abnormalities, cellular design, and tissue organization [4].

Histological characterization separates LC into non-small cell lung carcinoma (NSCLC) and small cell lung carcinoma (SCLC). NSCLC accounts for more than 85% of cases, whereas SCLC accounts for less than 15% [5]. NSCLC is mainly categorized into three subtypes: adenocarcinoma, squamous cell carcinoma, and large cell carcinoma. Fig. 1 visually demonstrates the lung cancer subtypes. Different imaging methods are utilized for accurate LC examination, such as computed tomography (CT), X-ray, and histopathological examination. CT is one of the most frequently used imaging techniques for LC staging and preoperative analysis [6]. Another basic imaging technique for LC diagnosis is X-ray, with the limitation of capturing low-resolution images [7]. MRI is a well-known method for high-sensitivity and high-specificity analysis, but it is not among the recommended imaging methods for LC diagnosis [8–10]. The results obtained from the imaging methods discussed above provide a reference for the restaging, early diagnosis, and staging of LC, but they are not treated as a benchmark for clinical tumor examination and qualitative staging, which are efficiently and accurately handled by histopathological analysis [11,12]. Furthermore, the focus of existing studies has been to exploit global (high-level) features while overlooking significant local (low-level) features.

images

Figure 1: Histology of lung cancers.

In the present era, cancer diagnosis has been modernized through digital imaging strategies. Early cancer classification by utilizing histopathological images has significantly transformed digital pathology, especially for LC examination. Substantial improvements in imaging research from image feature extraction (FE) [13,14] and image enhancement [15,16] to machine learning (ML) [17,18] and deep learning (DL) [19–21] strategies have clearly demonstrated the advancement for accurate imaging-based classification.

2  Related Work

The existing research regarding methods, significance, datasets, methodologies, and limitations for LC classification using histopathological images is provided in this section. The findings from the existing literature with different LC classification methods, mostly using the histopathological LC25000 dataset in comparison with the strengths and weaknesses of the proposed model in this study, are summarized in Table 1. The research gaps concluded from the existing studies in terms of accuracy, computational complexity, non-transparency, trade-off between convergence speed and precision, and lack of real-time deployment were the major sources of motivation behind the design of the proposed framework.

images

Kumar et al. [22] employed seven transfer-learning-driven architectures for enhanced FE using histopathological images of the colon and lungs. DenseNet, among the transfer learning models, performed significantly well, achieving a precision of 98.63% and an accuracy of 98.60%. Another LC classification solution was proposed by Masud et al. [23]. The study presented a color junction contrast enhancement method using an unmasking (UM) strategy, and FE was performed by extracting 2D wavelet and 2D Fourier characteristics. Finally, the extracted characteristics were classified through a CNN using the histopathological dataset LC25000, achieving an accuracy of 96.33%. A multi-input capsule network including two convolutional layer blocks was suggested by Ali and Ali [24]. The convolutional layer block (CLB) was developed for convolutional layers, and the separable convolutional layer block (SCLB) was designed for separable convolutional layers; the CLB and SCLB processed unprocessed and pre-processed histopathological LC25000 images, respectively. Hatuwal and Thapa [25] investigated the potential of CNNs for the classification of different lung tissues, such as squamous cell carcinoma, adenocarcinoma, and benign tissue, through LC25000 histopathological images. The validation accuracy achieved by the CNNs was 97.20%.

Another modified pre-trained CNN variant, AlexNet, along with a contrast enhancement strategy, was exploited by Mehmood et al. [26] for diagnosing colon and lung cancers using the LC25000 dataset. The modified AlexNet with contrast enhancement resulted in an accuracy of 98.4%. A computer-aided diagnostic system with CNNs for lung and colon cancer identification was proposed by Mangal et al. [27]. The study exploited the LC25000 histopathological image dataset and achieved 96% accuracy for colon and 97% accuracy for LC diagnosis. A DL-based diagnostic system was proposed by Civit-Masot et al. [28] for the detection of NSCLC using the LC25000 dataset. An explainable DL-driven solution was proposed for highlighting the affected areas and the contribution of each class to the classification process. The proposed solution reduced processing time while achieving an accuracy of 97.22%. Mamun et al. [29] exploited ensemble learning methods for LC detection, such as LightGBM, AdaBoost, XGBoost, and Bagging. The ensemble learning methods were employed on a Kaggle database of 301 persons reflecting factors such as smoking habits along with symptoms like chest pain. In the study, XGBoost performed exceptionally, with an accuracy of 94.42%. A multi-level CNN architecture was presented by Ramesh et al. [30] for diagnosing various LC types. The aim was to detect features from lung nodules of different morphologies and unequal sizes. In the study, the histopathological dataset LC25000 was used, achieving an overall accuracy of 89%.

Another novel strategy was proposed by Shanmugam and Rajaguru [31] for LC prediction using histopathological images from the LC25000 dataset. They concentrated on preprocessing and segmentation, along with FE through heuristic techniques such as grey wolf optimization and particle swarm optimization. Seven image classifiers were exploited for the classification of malignant and benign tissues, achieving an accuracy of 91.57%. The classification and segmentation of LC through histopathological images was performed by Krishnan et al. [32] through an “improved graph neural network” (IGNN), with weight optimization achieved using the green anaconda optimization (GAO) algorithm. To categorize LC into five types, Phankokkruad [33] proposed three transfer-learning-based networks: DenseNet201, ResNet50V2, and VGG16. The accuracies achieved by the three transfer learning architectures were 89%, 90%, and 62%, respectively.

Pradhan et al. [34] used an enhanced grasshopper optimization process with a random forest classifier for the classification of lung tissues by utilizing the LC25000 histopathological dataset, with the achieved accuracy of 98.50%. Setiawan et al. [35] suggested an LC classification model through convolutional neural networks (CNN) with gamma correction using the LC25000 dataset. CNN was used for attributes extraction and classification, whereas gamma correction was applied to adjust the light of the images. The highest accuracy of 87.16% was achieved through this model.

To facilitate histopathological imaging-based decision making regarding accurate LC prediction, a Context Aware Fusion Network (CAFNet) for holistic feature learning and spatially localized feature learning is proposed in this study for efficiently processing and extracting global and local features, respectively. The significant contributions of the study are summarized as follows:

•   Exploitation of MF with CLAHE enhances the lung histopathology by denoising and attaining clear contrast images.

•   CAFNet innovatively confirms the local and global feature uniformity to extract the contextual insights from the histopathological tissues, notably improving classification performance.

•   Incorporation of SARL and CARL equipped the model for a better, holistic, and focused feature learning process.

•   The proposed CAFNet architecture is carefully designed and engineered to be computationally inexpensive, easing deployment on resource-constrained devices.

•   The degree of adaptivity is carefully tuned in the PADAM optimization process for achieving the optimal performance by the proposed network.

•   Multiple XAI techniques are explored to offer valuable insights about the model’s decisive and descriptive behavior.

•   Deployment of the proposed network on the web-based interface using Gradio for addressing the accessibility challenges and allowing real-time inference and user interaction.

The proposed AI-based framework is a combination of both novel and well-established components. This study introduces a novel context-aware fusion of patch-level SARL and CARL with holistic attention, and a systematic study of PADAM adaptivity in the histopathology domain. Additionally, well-established components like CLAHE, Grad-CAM, LIME, and Gradio are effectively utilized to design an automated smart healthcare application for NSCLC analysis. Moreover, we would like to mention that the inspiring and promising outcomes of the EfficientNetV2 [36] variants in terms of training speed and stability were achieved through the incorporation of fused MBConv operations at the later stages of the network, which addresses the concerns of the earlier EfficientNetV1 variants. Building on these findings, the placement and incorporation of modules like MIBConv in our proposed CAFNet were carefully decided based on prior knowledge and achievements.

3  Methodology

This section describes our proposed framework for NSCLC classification through histopathological images. Initially, the enhancement of histopathological images is performed by exploiting an effective configuration of image enhancement techniques. Afterwards, the detailed description of the proposed framework, the explainability of the network, and the smart healthcare application are discussed in this section. The graphical workflow of the conducted research is given in Fig. 2.

images

Figure 2: Graphical workflow of the proposed study.

3.1 Dataset Description

This study exploits a color image dataset, LC25000 [37], released by James A. Haley Veterans’ Hospital, Tampa, Florida, United States of America (USA). LC25000 contains histopathological color images of colon and lung tissues. However, this research only utilizes cancerous and benign lung tissue images. Lung carcinomas are the most predominant source of invasive cancer and a major cause of high mortality rates in America [38]. In the image acquisition phase of the LC25000 database, the rules of the Health Insurance Portability and Accountability Act (HIPAA) were completely followed, and 750 color lung tissue images, including lung squamous cell carcinomas (lung-scc), benign lung tissues (lung-n), and lung adenocarcinomas (lung-aca), were acquired from microscopy slides [39].

Following this, the Augmentor package in the Python programming language was used for image augmentation of the pathology images [40]. The lung cancer tissue portion of the dataset was expanded to 15,000 color images, comprising 5000 images per class for lung squamous cell carcinomas, benign lung tissues, and lung adenocarcinoma. Later, the images were resized to 224 × 224 pixels, and the labels were one-hot encoded for our multi-class classification task. The data used in this study from LC25000 is publicly available, fully de-identified, validated, and HIPAA compliant. LC25000 is widely used and acknowledged for its effective augmentation-driven expansion, but limited patient diversity remains one of its core limitations. Therefore, we would like to clarify that the proposed research and its promising outcomes reflect the pattern-recognition power of the proposed framework within this benchmark, not immediate clinical generalization.
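As a concrete illustration of the label-encoding step, the sketch below one-hot encodes integer class labels with NumPy; the class ordering shown is hypothetical, not taken from the LC25000 release.

```python
import numpy as np

# Hypothetical index order for the three lung tissue classes in this study.
CLASSES = ["lung_aca", "lung_n", "lung_scc"]

def one_hot(labels, num_classes=3):
    """Encode integer class labels as one-hot vectors, as required by the
    categorical cross-entropy loss used for the multi-class task."""
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out
```

Each row of the result contains a single 1.0 at the position of the sample's class index.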

3.2 Histopathology Enhancement

Preprocessing plays an important role in refining the data, which is crucial for correct classification. For image enhancement, two effective filtration techniques were used in this study: CLAHE and MF. The most common challenge in benchmark histogram equalization strategies is the amplification of noise during the enhancement process [41]. However, CLAHE has shown promising outcomes in enhancing image quality by improving contrast without significant noise amplification. The prime aim of CLAHE is to balance overly dark and overly bright regions in the image. It works by separating the image into contextual regions and applying histogram equalization to each region. This region-specific strategy enables precise contrast enhancement for each region based on its specific needs. To address over-amplification concerns, CLAHE applies a clip threshold to the histogram equalization procedure within each region. This threshold, $T_{clip}$, controls contrast adjustment by limiting the intensification effect separately for each region. The process includes the conversion of images into the lightness, green-red, blue-yellow (LAB) color space to detach the L channel, which carries the most useful information for contrast adaptation. The process of applying CLAHE to this channel is given in Eq. (1).

$L' = \mathrm{CLAHE}(L) = L - \left(\dfrac{T_{clip}}{T_{max}} - 1\right) \times (L - \bar{L})$ (1)

where $L$ refers to the original brightness value, $L'$ corresponds to the adjusted brightness value, $\bar{L}$ is the average brightness of the bordering regions, $T_{clip}$ is the threshold clip limit, and $T_{max}$ is the uppermost histogram score allowed per region.

Additionally, a median filter is exploited in this study for removing the noise in the histopathological images. Median filters have shown promising results in the noise removal process of medical imaging datasets. MF lowers image noise by substituting outlier pixel values with the median of its adjacent values. The MF working process depends on a single parameter, which is the size of the neighborhood window.

Fig. 3 shows the quality difference between the original and the enhanced histopathology image using CLAHE and MF. We have performed critical hyperparameter tuning associated with CLAHE and MF to obtain the best possible image enhancement. Fig. 4 shows the sample of enhanced images under different parameter values. The optimal hyperparameter values were a window size of 5, a tile size of 8, and a threshold of 40.
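The median filtering step described above can be sketched in plain NumPy as follows; this is a minimal illustration of the neighborhood-median rule, not the exact preprocessing code used in the study.

```python
import numpy as np

def median_filter(img, window=5):
    """Denoise a 2D image by replacing each pixel with the median of its
    window x window neighborhood (window=5 was the study's optimal setting)."""
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")  # replicate border pixels
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + window, j:j + window])
    return out
```

In practice, the CLAHE step is typically applied via OpenCV's `cv2.createCLAHE(clipLimit=..., tileGridSize=...)` on the L channel of the LAB image; those parameter names follow the OpenCV API and may differ from the study's own implementation.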

images

Figure 3: Quality difference between the original and the enhanced histopathology image.

images images

Figure 4: Hyperparameter tuning of CLAHE and MF with respect to tile size, window size, and clip threshold.

3.3 Context Aware Fusion Network (CAFNet)

Existing medical image processing approaches often struggle to strike a balance between local and global feature extraction. Therefore, the suggested CAFNet is critically designed to overcome these constraints. The detailed pseudo code and the graphical overview of the CAFNet are given in Table 2 and Fig. 5, respectively. Additionally, Table 3 presents an extensive architectural insight into the proposed network with respect to the positioning and repetition of the blocks in the designed network. After effective preprocessing, the images undergo multiple MIBConv layers for the initial feature extraction process. The MIBConv layer utilizes depthwise separable convolutions, which are both effective and computationally inexpensive operations. Afterwards, the network is divided into two different phases. The first phase includes the division of the feature maps into four patches. Each patch then separately passes through the SARL and CARL modules for spatially localized feature learning (SLFL). These modules are carefully designed to focus on different regions in the image and highlight the most prominent ones. The outcomes of these attention modules are then passed to a global average pooling (GAP) operation that compresses and summarizes the information into a manageable form. Subsequently, the second phase of the network emphasizes holistic feature extraction (HFE) by focusing on the entire image instead of patches, using the same attention modules on a broader scale. These two pathways help the network capture both local and global features through two independent learning schemes. Following this, both sets of features are concatenated and passed to the specified fully connected layers for further feature refinement and representation. Finally, the network uses a softmax activation operation for classification.
This extensive configuration of the modules sets CAFNet as a potential solution in medical image classification, providing noteworthy gains in accuracy and efficiency.
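The dual-phase flow described above can be sketched schematically; in this NumPy illustration, `attend` stands in for the SARL/CARL attention stack, so the code mirrors only the data flow, not the trained network.

```python
import numpy as np

def gap(x):
    # Global average pooling over the spatial dims: (H, W, C) -> (C,)
    return x.mean(axis=(0, 1))

def quadrants(fmap):
    # Split an (H, W, C) feature map into four equal patches for the local path.
    h, w = fmap.shape[0] // 2, fmap.shape[1] // 2
    return [fmap[:h, :w], fmap[:h, w:], fmap[h:, :w], fmap[h:, w:]]

def dual_path(fmap, attend):
    """Local path: attention per patch then GAP; global path: attention on the
    whole map then GAP; both paths are concatenated into one feature vector."""
    local = np.concatenate([gap(attend(p)) for p in quadrants(fmap)])
    global_ = gap(attend(fmap))
    return np.concatenate([local, global_])
```

With a C-channel map, the local path contributes 4C values and the global path C values, giving a 5C-dimensional fused vector before the fully connected layers.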

images

images

Figure 5: Overview of the proposed CAFNet architecture demonstrating dual phase system for both local and global feature learning.

images

3.3.1 MIBConv Operation

The MIBConv layers are the initial components of the proposed CAFNet, and they greatly enhance the effectiveness of the architecture in processing and analyzing histopathological images. This block mainly consists of three components: a 1 × 1 convolution for extending the number of channels, as shown in Eq. (2), a depthwise separable convolution, as shown in Eq. (3), and a 1 × 1 convolution again to map the channels back to the original dimensional space, given in Eq. (4).

$Z_{expand} = \propto(A_1 \ast I + b_1)$ (2)

$Z_{depth} = \propto(A_{depth} \ast Z_{expand} + b_{depth})$ (3)

$Z_{project} = A_2 \ast Z_{depth} + b_2$ (4)

where $A_1$ refers to the score (weight) matrix, $I$ corresponds to the input, and $\propto$ and $b_1$ are the ReLU activation function and bias, respectively. In the proposed CAFNet, MIBConv layers are used for effectively refining features from both local and contextual cues in images. The hierarchical diagram of the MIBConv block is presented in Fig. 6.
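A minimal NumPy sketch of Eqs. (2)–(4) is given below; batch normalization and the residual skip typically present in inverted-bottleneck blocks are omitted, and the weight shapes are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pointwise(x, w, b):
    # A 1x1 convolution is a per-pixel linear map over channels:
    # (H, W, Cin) @ (Cin, Cout) -> (H, W, Cout)
    return x @ w + b

def depthwise3x3(x, k):
    # Per-channel 3x3 convolution with zero padding ("same"); k: (3, 3, C)
    h, w, _ = x.shape
    padded = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += padded[i:i + h, j:j + w] * k[i, j]
    return out

def mibconv(x, w1, b1, kd, bd, w2, b2):
    z = relu(pointwise(x, w1, b1))       # Eq. (2): expand channels
    z = relu(depthwise3x3(z, kd) + bd)   # Eq. (3): depthwise spatial filtering
    return pointwise(z, w2, b2)          # Eq. (4): project channels back down
```

The expand/depthwise/project factorization is what keeps the block computationally inexpensive relative to a full 3 × 3 convolution over all channel pairs.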

images

Figure 6: Workflow of the MIBConv block for initial feature extraction in CAFNet.

3.3.2 Spatial Attention with Residual Learning (SARL)

The SARL mechanism emphasizes the spatial pattern of features in the input feature maps, as demonstrated in Fig. 7. It determines which regions to focus on within the images, enabling the network to concentrate on the prominent features that serve as the main differentiating factors. Eqs. (5) and (6) show the mathematical representation of the SARL module in terms of the transition from the original feature map to the spatially attentive feature map.

$S_{spatial} = \sigma(\mathrm{Conv}(P))$ (5)

$I_{attentive} = S_{spatial} \odot P$ (6)

where $P$ is the input feature map, $\sigma$ denotes the sigmoid operation, and $\odot$ refers to element-wise multiplication. The spatial attention map $S_{spatial}$ is combined with the original feature map through element-wise multiplication to emphasize or suppress features.
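Eqs. (5) and (6) can be illustrated with the 1 × 1 convolution implemented as a channel-collapsing matrix product; the residual addition implied by the block name is omitted to stay faithful to the two equations, and the function name `sarl_mask` is ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sarl_mask(p, w, b):
    """Spatial attention sketch: a 1x1 conv (here a per-pixel matrix product)
    collapses C channels to one score per location; the sigmoid mask then
    re-weights the original feature map element-wise."""
    s = sigmoid(p @ w + b)  # Eq. (5): (H, W, 1) spatial attention mask
    return s * p            # Eq. (6): emphasize/suppress spatial positions
```

The mask is broadcast over the channel axis, so every channel at a given pixel is scaled by the same spatial score.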

images

Figure 7: Flow illustration of the SARL block in the proposed CAFNet.

3.3.3 Channel Attention with Residual Learning (CARL)

On the other hand, CARL emphasizes the channel-wise features of the images, as represented in Fig. 8. Firstly, GAP is used to obtain channel-wise statistics from the input feature map $P$. Afterwards, the pooled version undergoes a sigmoid operation to generate channel-attentive feature maps. Following this, the attentive maps are applied to the original feature maps to scale the features per channel. The above steps are mathematically presented in Eqs. (7)–(9).

$X = \mathrm{GAP}(P)$ (7)

$S_{channel} = \sigma(\mathrm{Dense}(X))$ (8)

$P_{attentive} = S_{channel} \times P$ (9)
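A corresponding NumPy sketch of Eqs. (7)–(9), with the dense layer reduced to a single weight matrix, is given below; shapes and the function name `carl_mask` are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def carl_mask(p, w, b):
    """Channel attention sketch: pool to per-channel statistics, pass them
    through a dense layer + sigmoid, and rescale each channel of P."""
    x = p.mean(axis=(0, 1))  # Eq. (7): GAP -> (C,) channel statistics
    s = sigmoid(x @ w + b)   # Eq. (8): dense + sigmoid -> channel weights
    return s * p             # Eq. (9): scale each channel of the original map
```

Unlike the spatial mask, this weight vector is broadcast over the spatial axes, so an entire channel is amplified or suppressed at once.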

images

Figure 8: Flow illustration of the CARL block in the proposed CAFNet.

Initially, the network starts with a convolutional layer as a base feature extractor for each input image $M_i$. Each input is passed through a Conv2D transformation as shown in Eq. (10), where $A$ refers to the convolutional kernel, $BN$ is batch normalization, and $\propto$ is the ReLU activation operation.

$I_{base} = \propto(BN(A \ast M_i + b))$ (10)

Afterwards, the spatially localized feature learning and holistic feature extraction are performed for effective learning at local and broader scales. The output acquired from both attention modules is further pooled using GAP, as shown in Eqs. (11) and (12). Thereafter, the information from both patch-based maps and global feature maps is concatenated. The final feature vector is determined by adding the output feature vector of SARL and CARL blocks, as shown in Eq. (13). Lastly, the combined features undergo the network of fully connected layers to classify the NSCLCs, as given in Eq. (14).

$P_{SARL}^{global} = \mathrm{GAP}(\mathrm{SARL}(P_{global}))$ (11)

$P_{CARL}^{global} = \mathrm{GAP}(\mathrm{CARL}(P_{global}))$ (12)

$P_{combined} = P_{SARL} + P_{CARL}$ (13)

$I_{final} = \mathrm{Dense}_{128}(\propto(\mathrm{Dense}_{256}(P_{combined})))$ (14)
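The fusion of Eqs. (11)–(14) can be sketched as follows, passing the attention modules and dense layers in as callables; activations and layer weights are abstracted away in this illustration.

```python
import numpy as np

def gap(x):
    # Global average pooling: (H, W, C) -> (C,)
    return x.mean(axis=(0, 1))

def fuse(p_global, sarl, carl, dense256, dense128):
    p_s = gap(sarl(p_global))            # Eq. (11): pooled SARL branch
    p_c = gap(carl(p_global))            # Eq. (12): pooled CARL branch
    combined = p_s + p_c                 # Eq. (13): element-wise sum of branches
    return dense128(dense256(combined))  # Eq. (14): stacked fully connected layers
```

Because the two branches are summed element-wise, their pooled outputs must share the same channel dimension; the dense stack then refines the fused vector before classification.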

3.4 Interpretability of the Proposed Network

The transparency [42] of a solution is critical in healthcare disciplines for building trust in AI-based diagnostic tools. For that reason, we have exploited the concept of explainable AI to produce a transparent and human-understandable solution for interpretable lung cancer analysis. Explainable AI (XAI) [43] provides interpretability by offering key findings on the decision-making process and qualitative insights into the solution. The transparency of the proposed network is analyzed through two different XAI-based approaches, mainly Grad-CAM and LIME, which show promising results in elaborating the decisive behavior of AI-based models. Both interpretability schemes helped to identify the regions within the images that were most prominent and contributed most to the specified prediction of the model. Table 4 shows the complete working procedure of Grad-CAM for explainable visualization of the network predictions [44]. Similarly, the schematic diagram of LIME-based explainable AI is given in Fig. 9. The optimization objective of LIME involves two loss functions. The key objectives in the optimization procedure of LIME are (I) to reduce the difference between the simple and complex models, and (II) to control the complexity of the simple model to preserve interpretability. The prime aim of LIME [45] is to ensure interpretability, transparency, and local faithfulness by reducing the first loss term while keeping the second loss small enough to be understandable for a human.
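The Grad-CAM computation summarized in Table 4 reduces to a short weighted-sum rule once the activations and gradients of the last convolutional layer are available; the sketch below implements that standard rule in NumPy.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Standard Grad-CAM: weight each (H, W) feature map of the last conv
    layer by the spatially averaged gradient of the class score, then
    ReLU the weighted sum and normalize it to a [0, 1] heatmap."""
    weights = gradients.mean(axis=(0, 1))                   # (K,) importances
    cam = np.maximum((activations * weights).sum(-1), 0.0)  # ReLU(weighted sum)
    if cam.max() > 0:
        cam = cam / cam.max()                               # normalize for overlay
    return cam
```

In a TensorFlow pipeline, `activations` and `gradients` would come from a `tf.GradientTape` over the last convolutional layer; that wiring is framework-specific and omitted here.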

images

images

Figure 9: Schematic diagram of LIME-based explainable artificial intelligence.

3.5 Partially Adaptive Moment Estimation (PADAM)

PADAM is a modified version of Adam, which bridges the gap between convergence and generalization capability [46] of the optimizer. It presents a new factor Pt, to modify the degree of adaptivity in the optimization strategy. The pseudo-code-based mathematical intuition behind the PADAM is presented in Table 5.

images

In Table 5, $b_t$ represents the resulting stochastic gradient and $\hat{e}_t$ denotes the moving average of the 2nd-degree momentum for the computed stochastic gradient. The core differentiator from standard Adam [47], as depicted in Table 5, is the addition of the partially adaptive term $P_t$ as an exponent on the 2nd-degree momentum:

$\omega_{t+1} = \omega_t - \beta_t \dfrac{n_t}{\hat{e}_t^{P_t}}, \quad \text{where } \hat{e}_t = \max(\hat{e}_{t-1}, e_t)$ (15)

PADAM transforms the standard Adam optimizer by varying the 2nd momentum explored in the adaptive learning rate. The overall update rules for both standard Adam and suggested PADAM are given below:

$\omega_{t+1} = \omega_t - \sigma \cdot \dfrac{n_t}{\sqrt{\hat{e}_t} + \varepsilon}$ (Adam) (16)

where $n_t$ and $e_t$ are the 1st- and 2nd-order momentum estimates, $\sigma$ is the learning rate (LR), and $\varepsilon$ is a small constant term.

$\omega_{t+1} = \omega_t - \sigma \cdot \dfrac{n_t}{\hat{e}_t^{P_t} + \varepsilon}$ (PADAM) (17)

In Eq. (17), $\hat{e}_t$ is raised to the power $P_t$ ($0 \le P_t \le 1$), which makes the learning rate partially adaptive. Eq. (16) uses $\hat{e}_t$ for a fully adjustable learning rate, which occasionally results in poor generalization. PADAM instead introduces the partially adaptive term $P_t$, which tunes the influence of $\hat{e}_t$. This addition can reduce the adaptiveness of the learning dynamics to enhance generalization. Through this control of adaptiveness, PADAM intends to bridge the gap between convergence and generalization: the partially adaptive term $P_t$ permits a more supervised convergence procedure, avoiding the excessive adaptiveness that can obstruct convergence.
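A simplified single-step PADAM update, following Eqs. (15) and (17) but omitting Adam's bias-correction terms, can be written as below; the hyperparameter defaults are illustrative.

```python
import numpy as np

def padam_step(w, g, n, e, e_hat, lr=0.001, b1=0.9, b2=0.999, p=0.5, eps=1e-8):
    """One simplified PADAM update; p is the partial-adaptivity exponent P_t
    (p=0.5 was the best-performing value in this study's experiments)."""
    n = b1 * n + (1 - b1) * g            # 1st-moment (momentum) estimate
    e = b2 * e + (1 - b2) * g * g        # 2nd-moment estimate
    e_hat = np.maximum(e_hat, e)         # running max of 2nd moment, Eq. (15)
    w = w - lr * n / (e_hat ** p + eps)  # Eq. (17): partially adaptive step
    return w, n, e, e_hat
```

Smaller `p` pushes the update toward SGD-with-momentum behavior, while larger `p` increases the adaptivity of the effective per-parameter learning rate.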

4  Simulations and Results

This section presents the performance analysis of CAFNet with different adaptivity factors of PADAM and the validation of the proposed network for NSCLC classification. The LC25000 dataset was divided into training, validation, and testing sets with a ratio of 70:10:20. Both hold-out and k-fold validation strategies are used to evaluate the effectiveness of the proposed network. All simulations were conducted using the TensorFlow deep learning framework. This research was conducted on an HP system with an octa-core Intel i7 9th-generation CPU. CAFNet was trained for 20 epochs for all optimization variations, and the average training time was almost 138 s per epoch. The proposed CAFNet is trained using an optimal learning rate of 0.001, a batch size of 32, and the categorical cross-entropy loss function, which is widely used for multi-class classification problems. Additionally, the interpretability of the proposed approach is extensively analyzed through effective explainable AI techniques. The performance of the network is validated through different evaluation measures, such as accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve, for the rigorous performance evaluation essential to healthcare applications. During the inference stage, it was observed that the trained CAFNet with optimal hyperparameters required approximately 25 to 28 ms per image, ensuring promising real-time performance appropriate for healthcare disciplines.
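The 70:10:20 split described above can be reproduced with a simple index shuffle; the seed is an arbitrary choice for this illustration.

```python
import numpy as np

def split_70_10_20(n, seed=42):
    """Shuffle n sample indices and split them 70:10:20 into
    train/validation/test index arrays."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_tr, n_val = int(0.7 * n), int(0.1 * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_val], idx[n_tr + n_val:]
```

For the 15,000-image lung subset this yields 10,500 training, 1500 validation, and 3000 test samples.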

4.1 CAFNet’s Performance Analysis

The proposed CAFNet is executed with a standard Adam optimizer and four different adaptivity terms of the PADAM optimization approach under the same settings to attain the optimal degree of adaptivity with respect to the learning rate. Table 6 shows a brief performance analysis of CAFNet with Adam and with different PADAM $P_t$ values. Initially, CAFNet was trained with a standard Adam optimizer for 20 iterations, which resulted in an accuracy of around 89%. Later, the network was executed with PADAM ($P_t = 0.25$) for 20 epochs, which showed satisfactory classification performance, achieving a test loss of 0.1523 and an accuracy of 0.9390. Following this, the degree of adaptivity was further increased from 0.25 to 0.5, which resulted in improved diagnostic performance. With $P_t = 0.5$, CAFNet attained a test loss of 0.7779 and a substantial accuracy of 0.9717. Similarly, the performance in terms of other evaluation metrics, like precision, recall, and F1-score for each category of cancer, was noteworthy, showing the generalizability of the proposed approach. Afterwards, the degree of adaptivity was further increased to 0.75, which showed a decremental trend in the learning behavior compared with the previous adaptivity factors. With $P_t = 0.75$, the proposed network saw a significant drop in predictive performance, attaining an accuracy of 0.9417. The path towards divergent behavior was quite visible at this degree of adaptiveness in the optimization process. Further analysis of CAFNet with $P_t = 1.0$ provided a clear picture of complete divergence in the optimization process, as shown in Table 6. The variations in the learning behavior of the proposed network with different $P_t$ values are presented through the learning curves given in Fig. 10. A similar trend of adaptiveness in the PADAM optimization can also be observed through the AUC-ROC curves presented in Fig. 11, which show the superiority of the proposed CAFNet with PADAM ($P_t = 0.5$) compared with its counterparts. To sum up the discussion on the identification of an optimal degree of adaptiveness in the optimization process, Fig. 12 presents the confusion matrices at four different $P_t$ values. Hence, we conclude that the proposed CAFNet showed optimal classification performance with PADAM ($P_t = 0.5$).

images

images

Figure 10: Change in the learning behavior of the CAFNet with different Pt values.

images

Figure 11: AUC-ROC curves of CAFNet with different Pt values.

images images

Figure 12: Confusion matrices of the proposed CAFNet with (a) PADAM (Pt=0.25), (b) PADAM (Pt=0.5), (c) PADAM (Pt=0.75), (d) PADAM (Pt=1.0).

4.2 Validation of CAFNet with K-Fold Cross-Validation

After the performance analysis using the hold-out validation approach, we further applied 5-fold cross-validation, splitting the dataset into multiple training and testing sets to establish the effectiveness, reliability, and generalizability of the proposed model. Cross-validation is particularly useful for evaluating the predictive performance of deep learning networks in healthcare applications. Table 7 provides a detailed performance overview across folds and iterations, offering crucial insights into the effectiveness of CAFNet as an NSCLC classifier. The model demonstrates strong classification ability, with a consistent average accuracy of around 0.95. For each fold, the performance of CAFNet was recorded at four iteration stages in terms of both loss and accuracy, and the mean and standard deviation (std) across folds were reported for each stage. The consistent accuracy across iterations indicates stable convergence, and the low std of both loss and accuracy reflects the steadiness and robustness of the model for the task at hand. Table 8 shows the progressive rise in classification accuracy as each component is added to the proposed framework. Each component contributes measurably to the classification performance of CAFNet, and the comprehensive analysis confirms that integrating all of the proposed components is critical for obtaining optimal results.
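The fold construction and the mean/std aggregation reported in Table 7 can be sketched as follows, using only the Python standard library. The accuracy values here are placeholders: in the real pipeline each score comes from training CAFNet on the training folds and evaluating on the held-out fold:

```python
import random
from statistics import mean, stdev

def five_fold_indices(n_samples, n_folds=5, seed=42):
    """Partition sample indices into disjoint folds, mirroring the
    5-fold cross-validation protocol used to validate CAFNet."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[k::n_folds] for k in range(n_folds)]

folds = five_fold_indices(1000)
results = []
for k in range(len(folds)):
    test_idx = folds[k]
    train_idx = [i for j in range(len(folds)) if j != k for i in folds[j]]
    # Placeholder score: here one would train on train_idx and
    # evaluate on test_idx; 0.95 is illustrative only.
    results.append(0.95)

print("mean=%.4f std=%.4f" % (mean(results), stdev(results)))
```

Each sample appears in exactly one test fold, so the mean and std summarize performance over five disjoint test sets, as in Table 7.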

images

images

4.3 Interpretability of the CAFNet with LIME and Grad-CAM

This study employs two explainable AI approaches, LIME and Grad-CAM, to provide crucial insight into the decision behavior of the proposed network. Despite the rapid adoption of AI, the transparency of deep networks remains a key concern, particularly in sensitive applications such as healthcare, so human-understandable explanations of a model's qualitative insights and decision-making process are essential. Fig. 13 presents a LIME-based explanation of CAFNet's prediction for a given histopathological image. The figure shows the original image alongside a highlighted map that provides a superpixel-level explanation of the prediction. The regions circled in yellow indicate the most influential areas behind the prediction of lung squamous cell carcinoma, and the accompanying feature-contribution plot reports the approximated positive weights of these superpixel regions. From this explanation, we can identify the most significant aspects of the image and the decision behavior of the proposed model. Similarly, Fig. 14 shows a Grad-CAM-based explanation of CAFNet's decision-making process. The Grad-CAM heatmap highlights the spatial regions of the histopathology image that contribute most to the lung squamous cell carcinoma prediction: red and yellow mark the most influential regions, while cooler colors indicate regions with minimal or no contribution. The Grad-CAM overlay maps the heatmap onto the original image, giving an at-a-glance view of the model's decision by highlighting the most significant regions in a human-understandable form.
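The heatmap computation behind Fig. 14 follows the standard Grad-CAM formulation [44]: each feature map is weighted by the spatial mean of its gradient, the weighted maps are summed, and a ReLU keeps only positive evidence. The NumPy sketch below uses toy activations and gradients as stand-ins for those of CAFNet's last convolutional layer:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from (C, H, W) activations and gradients:
    channel weights = spatial mean of gradients, then weighted sum,
    ReLU, and normalization to [0, 1] for overlay."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k, one per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize for visualization
    return cam

rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))        # toy last-conv activations (8 channels, 7x7)
grads = rng.normal(size=(8, 7, 7))  # toy gradients of the class score
heatmap = grad_cam(acts, grads)
print(heatmap.shape)
```

The resulting heatmap is upsampled to the input resolution and alpha-blended onto the histopathology image to produce the overlay shown in Fig. 14.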

images

Figure 13: LIME-based explanation on the CAFNet’s prediction.

images

Figure 14: Grad-CAM-based explanation on the CAFNet’s prediction.

4.4 t-Distributed Stochastic Neighbor Embedding (t-SNE) Visualization of CAFNet with Respect to Different Pt Values

To verify the discriminative abilities of the proposed network, this study uses a t-SNE-driven two-dimensional visualization [48]. First, principal component analysis (PCA) reduces the dimensionality of the feature vectors learned by the final dense layers, which stabilizes the subsequent embedding and clarifies the visualization of the high-dimensional space. The 2D embeddings are then computed to project the learned features. Fig. 15 shows the detailed t-SNE visualizations of the proposed CAFNet at different Pt values. The network trained with PADAM (Pt=0.5) performs best, producing the most compact and well-separated clusters among its counterparts. This qualitative analysis demonstrates the ability of CAFNet to increase inter-class separation while decreasing intra-class variance.
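The PCA-then-t-SNE pipeline can be sketched with scikit-learn as follows; the synthetic features, component count, and perplexity are illustrative stand-ins for CAFNet's dense-layer features and tuning:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in for dense-layer features: 60 samples, 256-D,
# two synthetic classes offset from each other.
features = np.vstack([rng.normal(0.0, 1.0, (30, 256)),
                      rng.normal(3.0, 1.0, (30, 256))])

# PCA first: stabilizes and denoises the high-dimensional space...
reduced = PCA(n_components=20, random_state=0).fit_transform(features)
# ...then t-SNE projects to 2D for cluster inspection.
embedding = TSNE(n_components=2, perplexity=10,
                 random_state=0).fit_transform(reduced)
print(embedding.shape)
```

Well-separated classes appear as compact, distinct clusters in the 2D embedding, which is the qualitative criterion used to compare the Pt settings in Fig. 15.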

images

Figure 15: t-SNE visualization of the proposed CAFNet with (a) PADAM (Pt=0.25), (b) PADAM (Pt=0.5), (c) PADAM (Pt=0.75), (d) PADAM (Pt=1.0).

4.5 Deployment on the Web-Based Interface Using Gradio

To address accessibility concerns, we deployed the proposed CAFNet on a web-based user interface built with the Python library Gradio [49]. Gradio-driven web interfaces help bridge the gap between academic research and clinical usage. Through the designed interface, patients, researchers, and healthcare professionals can access the proposed solution and receive real-time predictions on their histopathological images. The interface is user-friendly and supports real-time interaction, leading to reliable and fast responses. This smart healthcare system also allows us to monitor the performance of the proposed network on diverse histopathology data for NSCLC classification. Fig. 16 shows the designed web-based interface built on the CAFNet architecture: given a histopathology image from the user, it returns the prediction, the confidence of that prediction, and a Grad-CAM-driven explanation of the input. This application will help us improve the model according to the needs and feedback of its users. Fig. 17 presents the overall information flow of the application, from the user's input to the final prediction by the proposed model.

images

Figure 16: Deployment of the proposed CAFNet on a web-based interface using Gradio.

images

Figure 17: Visualization of the complete information flow in the designed smart healthcare application for NSCLC classification.

4.6 Performance Comparison with the Existing Methods

The outcomes of the study are promising with respect to the development of a smart healthcare system for NSCLC classification through histopathology. We therefore compared the performance of the proposed network with existing state-of-the-art models for NSCLC classification, as given in Table 9. CAFNet outperforms its counterparts in terms of the standard evaluation measures: accuracy (ACC), precision (PRS), recall (RC), and F1-score (F1S). Dashes (- - -) in the comparison table indicate evaluation measures not reported in the corresponding works. Note that all comparisons with existing methodologies are reported at the benchmark level and are not protocol-matched. The proposed network achieves an impressive accuracy of 97.17% for the task at hand. The computational complexity of the network is also compared with the existing models in terms of architectural parameters (AP, in millions) and model size (MS, in MB). Table 9 makes it evident that the proposed model is markedly more efficient in computational complexity than its counterparts. Hence, the proposed CAFNet serves as an accurate, efficient, and computationally inexpensive solution for the NSCLC classification task.

images

5  Conclusions and Future Directions

This research introduces a framework for interpretable AI-driven NSCLC diagnostics. Lung histopathology images are enhanced by combining MF and CLAHE, with careful hyperparameter tuning for optimal denoising and clear contrast. To improve prediction accuracy, the proposed CAFNet architecture captures the contextual information encapsulated in histopathology tissues by combining detailed local features with global features for a wider viewpoint. The superior diagnostic capability of CAFNet is achieved by tuning the degree of adaptivity of the PADAM optimizer, with optimal convergence obtained at Pt=0.5. The proposed framework outperforms existing benchmark studies with an improved average accuracy of around 7.36%, a substantial gain for the problem at hand. The study moves toward generalizable and trustworthy artificial intelligence in disease diagnostics by providing transparency in decision-making through two post-hoc XAI strategies, namely LIME and Grad-CAM. Moreover, the proposed network was deployed in a web-based interface using Gradio that is easily accessible to patients and healthcare professionals. Although the proposed framework has shown substantial results, it should still be regarded as a research prototype: external validation on real-world patient data, a continuation of this research direction, is required before the system can be reliably deployed in healthcare environments.

In the near future, we aim to further improve the performance of the designed smart healthcare application using user feedback. Our future research also includes the design of an intelligent AI network capable of handling two different data modalities: for instance, a system that analyzes lung cancer through both computed tomography and histopathology images.

Acknowledgement: We would like to sincerely thank National Science and Technology Council (NSTC), Taiwan, and Intelligent Recognition Industry Service (IRIS) Center, Taiwan for supporting this research.

Funding Statement: This work was supported in part by the National Science and Technology Council (NSTC), Taiwan, under project number 114WFA2610132 (NSTC 114-2221-E-224-020) and in part by the “Intelligent Recognition Industry Service Center” from the Featured Areas Research Center-Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan.

Author Contributions: The author contributions to this paper are as follows: original draft and validation, conceptualization and methodology: Muhammad Waqar; supervision, software and formal analysis: Zeshan Aslam Khan; formal analysis, funding acquisition: Arthur Chang; review, investigation and visualization: Zhishan Guo; writing, review, editing and supervision: Chuan-Yu Chang; validation: Chun-Liang Lai. All authors reviewed and approved the final version of the manuscript.

Availability of Data and Materials: The medical imaging data utilized in this research can be found from the relevant references cited in this paper. Furthermore, the related data or material can be made available upon reasonable request by contacting the corresponding authors of this paper.

Ethics Approval: Not applicable.

Conflicts of Interest: The authors declare no conflicts of interest.

References

1. Lu W, Sun J, Jing Y, Xu J, Huang C, Deng Y, et al. Combined use of gefitinib and bevacizumab in advanced non-small-cell lung cancer with EGFR G719S/S768I mutations and acquired C797S without T790M after osimertinib: a case report and literature review. Curr Oncol. 2025;32(4):201. doi:10.3390/curroncol32040201. [Google Scholar] [PubMed] [CrossRef]

2. Elhassan SM, Darwish SM, Elkaffas SM. An enhanced lung cancer detection approach using dual-model deep learning technique. Comput Model Eng Sci. 2025;142(1):835–67. doi:10.32604/cmes.2024.058770. [Google Scholar] [CrossRef]

3. Jani CT, Kareff SA, Morgenstern-Kaplan D, Salazar AS, Hanbury G, Salciccioli JD, et al. Evolving trends in lung cancer risk factors in the ten most populous countries: an analysis of data from the 2019 Global burden of disease study. EClinicalMedicine. 2025;79(4):103033. doi:10.1016/j.eclinm.2024.103033. [Google Scholar] [PubMed] [CrossRef]

4. Shamas S, Panda SN, Sharma I, Guleria K, Singh A, Ali AlZubi A, et al. An improved lung cancer segmentation based on nature-inspired optimization approaches. Comput Model Eng Sci. 2024;138(2):1051–75. doi:10.32604/cmes.2023.030712. [Google Scholar] [CrossRef]

5. Hendriks LEL, Remon J, Faivre-Finn C, Garassino MC, Heymach JV, Kerr KM, et al. Non-small-cell lung cancer. Nat Rev Dis Primers. 2024;10(1):71. doi:10.1038/s41572-024-00551-9. [Google Scholar] [PubMed] [CrossRef]

6. Draelos RL, Dov D, Mazurowski MA, Lo JY, Henao R, Rubin GD, et al. Machine-learning-based multiple abnormality prediction with large-scale chest computed tomography volumes. Med Image Anal. 2021;67:101857. doi:10.1016/j.media.2020.101857. [Google Scholar] [PubMed] [CrossRef]

7. Miki T, Yano S, Hanibuchi M, Sone S. Bone metastasis model with multiorgan dissemination of human small-cell lung cancer (SBC-5) cells in natural killer cell-depleted SCID mice. Oncol Res. 2000;12(5):209–17. doi:10.3727/096504001108747701. [Google Scholar] [PubMed] [CrossRef]

8. Kang Y, Fang Y, Lai X. Automatic detection of diabetic retinopathy with statistical method and Bayesian classifier. J Med Imaging Hlth Inform. 2020;10(5):1225–33. doi:10.1166/jmihi.2020.3025. [Google Scholar] [PubMed] [CrossRef]

9. Wang W, Zhang L. Magnetic resonance imaging manifestations of brain metastases in patients with lung cancer. J Med Imaging Hlth Inform. 2020;10(12):2985–8. doi:10.1166/jmihi.2020.3248. [Google Scholar] [PubMed] [CrossRef]

10. Szabó M, Bozó A, Darvas K, Soós S, Őzse M, Iványi ZD. The role of ultrasonographic lung aeration score in the prediction of postoperative pulmonary complications: an observational study. BMC Anesthesiol. 2021;21(1):19. doi:10.1186/s12871-021-01236-6. [Google Scholar] [PubMed] [CrossRef]

11. Sari CT, Gunduz-Demir C. Unsupervised feature extraction via deep learning for histopathological classification of colon tissue images. IEEE Trans Med Imag. 2019;38(5):1139–49. doi:10.1109/TMI.2018.2879369. [Google Scholar] [PubMed] [CrossRef]

12. Sudharshan PJ, Petitjean C, Spanhol F, Oliveira LE, Heutte L, Honeine P. Multiple instance learning for histopathological breast cancer image classification. Expert Syst Appl. 2019;117(9):103–11. doi:10.1016/j.eswa.2018.09.049. [Google Scholar] [CrossRef]

13. Zhang Y, Zhang X, Zhu W. ANC: attention network for COVID-19 explainable diagnosis based on convolutional block attention module. Comput Model Eng Sci. 2021;127(3):1037–58. doi:10.32604/cmes.2021.015807. [Google Scholar] [CrossRef]

14. Ponnarengan H, Rajendran S, Khalkar V, Devarajan G, Kamaraj L. Data-driven healthcare: the role of computational methods in medical innovation. Comput Model Eng Sci. 2025;142(1):1–48. doi:10.32604/cmes.2024.056605. [Google Scholar] [CrossRef]

15. Dinh PH, Giang NL. A new medical image enhancement algorithm using adaptive parameters. Int J Imag Syst Technol. 2022;32(6):2198–218. doi:10.1002/ima.22778. [Google Scholar] [CrossRef]

16. Alalwan N, Taloba AI, Abozeid A, Alzahrani AI, Al-Bayatti AH. A hybrid classification and identification of pneumonia using African buffalo optimization and CNN from chest X-ray images. Comput Model Eng Sci. 2024;138(3):2497–517. doi:10.32604/cmes.2023.029910. [Google Scholar] [CrossRef]

17. Khan ZA, Waqar M, Raja MJAA, Chaudhary NI, Khan ATMA, Raja MAZ. Generalized fractional optimization-based explainable lightweight CNN model for malaria disease classification. Comput Biol Med. 2025;185(4):109593. doi:10.1016/j.compbiomed.2024.109593. [Google Scholar] [PubMed] [CrossRef]

18. Alnuaimi MN, Alqahtani NS, Gollapalli M, Rahman A, Alahmadi A, Bakry A, et al. Transfer learning empowered skin diseases detection in children. Comput Model Eng Sci. 2024;141(3):2609–23. doi:10.32604/cmes.2024.055303. [Google Scholar] [CrossRef]

19. Waqar M, Khan ZA, Khawaja ST, Chaudhary NI, Khan S, Cheema KM, et al. Explainable clinical diagnosis through unexploited yet optimized fine-tuned ConvNeXt Models for accurate monkeypox disease classification. SLAS Technol. 2025;33(5):100336. doi:10.1016/j.slast.2025.100336. [Google Scholar] [PubMed] [CrossRef]

20. Tsuneki M. Deep learning models in medical image analysis. J Oral Biosci. 2022;64(3):312–20. doi:10.1016/j.job.2022.03.003. [Google Scholar] [PubMed] [CrossRef]

21. Khan ZA, Waqar M, Khan HU, Chaudhary NI, Khan AT, Ishtiaq I, et al. Fine-tuned deep transfer learning: an effective strategy for the accurate chronic kidney disease classification. PeerJ Comput Sci. 2025;11(2):e2800. doi:10.7717/peerj-cs.2800. [Google Scholar] [PubMed] [CrossRef]

22. Kumar N, Sharma M, Singh VP, Madan C, Mehandia S. An empirical study of handcrafted and dense feature extraction techniques for lung and colon cancer classification from histopathological images. Biomed Signal Process Control. 2022;75(1):103596. doi:10.1016/j.bspc.2022.103596. [Google Scholar] [CrossRef]

23. Masud M, Sikder N, Nahid AA, Bairagi AK, AlZain MA. A machine learning approach to diagnosing lung and colon cancer using a deep learning-based classification framework. Sensors. 2021;21(3):748. doi:10.3390/s21030748. [Google Scholar] [PubMed] [CrossRef]

24. Ali M, Ali R. Multi-input dual-stream capsule network for improved lung and colon cancer classification. Diagnostics. 2021;11(8):1485. doi:10.3390/diagnostics11081485. [Google Scholar] [PubMed] [CrossRef]

25. Hatuwal BK, Thapa HC. Lung cancer detection using convolutional neural network on histopathological images. Int J Comput Trends Technol. 2020;68(10):21–4. doi:10.14445/22312803/ijctt-v68i10p104. [Google Scholar] [CrossRef]

26. Mehmood S, Ghazal TM, Khan MA, Zubair M, Naseem MT, Faiz T, et al. Malignancy detection in lung and colon histopathology images using transfer learning with class selective image processing. IEEE Access. 2022;10:25657–68. doi:10.1109/ACCESS.2022.3150924. [Google Scholar] [PubMed] [CrossRef]

27. Mangal S, Chaurasia A, Khajanchi A. Convolution Neural Networks for diagnosing colon and lung cancer histopathological images. arXiv:2009.03878. 2020. [Google Scholar]

28. Civit-Masot J, Bañuls-Beaterio A, Domínguez-Morales M, Rivas-Pérez M, Muñoz-Saavedra L, Rodríguez Corral JM. Non-small cell lung cancer diagnosis aid with histopathological images using Explainable Deep Learning techniques. Comput Methods Programs Biomed. 2022;226(3):107108. doi:10.1016/j.cmpb.2022.107108. [Google Scholar] [PubMed] [CrossRef]

29. Mamun M, Farjana A, Al Mamun M, Ahammed MS. Lung cancer prediction model using ensemble learning techniques and a systematic review analysis. In: 2022 IEEE World AI IoT Congress (AIIoT); 2022 Jun 6–9; Seattle, WA, USA. p. 187–93. doi:10.1109/AIIoT54504.2022.9817326. [Google Scholar] [PubMed] [CrossRef]

30. Ramesh M, Maheswaran S, Theivanayaki S. Efficient lung cancer classification on multi level convolution neural network using histopathological images. In: 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT); 2023 Jul 6–8; Delhi, India. p. 1–7. doi:10.1109/ICCCNT56998.2023.10307852. [Google Scholar] [PubMed] [CrossRef]

31. Shanmugam K, Rajaguru H. Exploration and enhancement of classifiers in the detection of lung cancer from histopathological images. Diagnostics. 2023;13(20):3289. doi:10.3390/diagnostics13203289. [Google Scholar] [PubMed] [CrossRef]

32. Krishnan SD, Pelusi D, Daniel A, Suresh V, Balusamy B. Improved graph neural network-based green anaconda optimization for segmenting and classifying the lung cancer. Math Biosci Eng. 2023;20(9):17138–57. doi:10.3934/mbe.2023764. [Google Scholar] [PubMed] [CrossRef]

33. Phankokkruad M. Ensemble transfer learning for lung cancer detection. In: 2021 4th International Conference on Data Science and Information Technology; 2021 Jul 23–25; Shanghai China. p. 438–42. doi:10.1145/3478905.3478995. [Google Scholar] [CrossRef]

34. Pradhan M, Bhuiyan A, Mishra S. Histopathological lung cancer detection using enhanced grasshopper optimization algorithm with random forest. Int J Intell Eng Syst. 2022;15(6):11–20. doi:10.22266/ijies2022.1231.02. [Google Scholar] [CrossRef]

35. Setiawan W, Suhadi MM, Pramudita YD. Histopathology of lung cancer classification using convolutional neural network with gamma correction. Commun Math Biol Neurosci. 2022;2022:81. doi:10.28919/cmbn/7611. [Google Scholar] [CrossRef]

36. Tan M, Le Q. Efficientnetv2: smaller models and faster training. In: Proceedings of the International Conference on Machine Learning; 2021 Jul 18–24; Online. [Google Scholar]

37. Borkowski AA, Bui MM, Thomas LB, Wilson CP, DeLand LA, Mastorides SM. Lung and colon cancer histopathological image dataset (LC25000). arXiv:1912.12142. 2019. [Google Scholar]

38. Zullig LL, Jackson GL, Dorn RA, Provenzale DT, McNeil R, Thomas CM, et al. Cancer incidence among patients of the U.S. veterans affairs health care system. Mil Med. 2012;177(6):693–701. doi:10.7205/milmed-d-11-00434. [Google Scholar] [PubMed] [CrossRef]

39. Borkowski AA, Wilson CP, Borkowski SA, Thomas LB, Deland LA, Grewe SJ, et al. Comparing artificial intelligence platforms for histopathologic cancer diagnosis. Fed Pract. 2019;36(10):456–63. [Google Scholar] [PubMed]

40. Bloice MD, Roth PM, Holzinger A. Biomedical image augmentation using Augmentor. Bioinformatics. 2019;35(21):4522–4. doi:10.1093/bioinformatics/btz259. [Google Scholar] [PubMed] [CrossRef]

41. Hayati M, Muchtar K, Roslidar, Maulina N, Syamsuddin I, Elwirehardja GN, et al. Impact of CLAHE-based image enhancement for diabetic retinopathy classification through deep learning. Procedia Comput Sci. 2023;216(1):57–66. doi:10.1016/j.procs.2022.12.111. [Google Scholar] [CrossRef]

42. Angelov PP, Soares EA, Jiang R, Arnold NI, Atkinson PM. Explainable artificial intelligence: an analytical review. WIREs Data Min Knowl. 2021;11(5):e1424. doi:10.1002/widm.1424. [Google Scholar] [CrossRef]

43. Xu F, Uszkoreit H, Du Y, Fan W, Zhao D, Zhu J. Explainable AI: a brief survey on history, research areas, approaches and challenges. Cham, Switzerland: Springer International Publishing; 2019. p. 563–74. doi:10.1007/978-3-030-32236-6_51. [Google Scholar] [CrossRef]

44. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: visual explanations from deep networks via gradient-based localization. In: 2017 IEEE International Conference on Computer Vision (ICCV); 2017 Oct 22–29; Venice, Italy. p. 618–26. doi:10.1109/ICCV.2017.74. [Google Scholar] [CrossRef]

45. Luo S, Ivison H, Han SC, Poon J. Local interpretations for explainable natural language processing: a survey. ACM Comput Surv. 2024;56(9):1–36. doi:10.1145/3649450. [Google Scholar] [CrossRef]

46. Chen J, Zhou D, Tang Y, Yang Z, Cao Y, Gu Q. Closing the generalization gap of adaptive gradient methods in training deep neural networks. In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence; 2020 Jul 11–17; Yokohama, Japan. p. 3267–75. doi:10.24963/ijcai.2020/452. [Google Scholar] [CrossRef]

47. Sashank JR, Satyen K, Sanjiv K. On the convergence of Adam and beyond. In: Proceedings of the International Conference on Learning Representations; 2018 Apr 4–May 3; Vancouver, BC, Canada. [Google Scholar]

48. Anvari MA, Rahmati D, Kumar S. t-Distributed stochastic neighbor embedding. In: Dimensionality reduction in machine learning. San Francisco, CA, USA: Morgan Kaufmann; 2025. p. 187–207. [Google Scholar]

49. Abid A, Abdalla A, Abid A, Khan D, Alfozan A, Zou J. Gradio: hassle-free sharing and testing of ML models in the wild. arXiv:1906.02569. 2019. [Google Scholar]




Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.