Open Access
ARTICLE
Hybrid Laplacian-DoG: Noise-Preserving 3D FDG-PET Contrast Enhancement for Improved MCI Detection
Department of Software Engineering, Kaunas University of Technology, Kaunas, Lithuania
* Corresponding Author: Ovidijus Grigas. Email:
(This article belongs to the Special Issue: Recent Advances in Signal Processing and Computer Vision)
Computer Modeling in Engineering & Sciences 2026, 147(1), 39 https://doi.org/10.32604/cmes.2026.077324
Received 07 December 2025; Accepted 23 March 2026; Issue published 27 April 2026
Abstract
Early detection of Mild Cognitive Impairment (MCI) with FDG-PET is essential for timely Alzheimer’s disease intervention. However, PET image quality is limited by low spatial resolution, partial volume effects, and Poisson noise. Standard enhancement methods, such as Bilateral filtering or Contrast Limited Adaptive Histogram Equalization (CLAHE), can increase contrast but often introduce heavy noise or distort image texture, while deep learning methods may produce hallucinated structures. We propose a fully data-adaptive, non-learned 3D enhancement framework, deterministic for a given input volume, that combines Laplacian-based local contrast modulation with a gradient-gated Difference-of-Gaussians (DoG) detail injector. This hybrid design sharpens anatomical boundaries while keeping noise amplification near unity in uniform regions: the method enhances structure only where true radiotracer gradients are present. We evaluated the approach on a large Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohort of 964 Cognitively Normal and 964 MCI FDG-PET scans, where classifiers trained on the enhanced images achieved consistent gains over raw inputs across architectures and anatomical planes.
Alzheimer’s Disease (AD) remains a major challenge for global public health, and Mild Cognitive Impairment (MCI) constitutes a transitional stage in which therapeutic interventions are most likely to alter disease progression [1]. Although magnetic resonance imaging (MRI) captures structural changes, it often does so only after significant neuronal loss has occurred [2]. In contrast, 18F-fluorodeoxyglucose Positron Emission Tomography (18F-FDG-PET) provides a functional map of cerebral glucose metabolism, capable of identifying hypometabolic biomarkers before structural changes become visible [3]. However, the diagnostic value of FDG-PET is limited by the physical properties of the modality itself. PET images have a relatively low spatial resolution (about 4–6 mm), a low Signal-to-Noise Ratio (SNR), and are affected by Partial Volume Effects (PVE) [4]. As a result, signals from small cortical structures tend to mix with those of nearby tissue, which blurs important boundaries. For MCI, where metabolic changes are mild, blurring can hide early abnormalities and reduce diagnostic accuracy [5].
Enhancing PET images is challenging because increasing contrast often increases noise [6]. Traditional methods attempt to address this trade-off but still have notable limitations. In routine clinical workflows, Gaussian smoothing is applied to reduce Poisson noise, but it also increases partial-volume effects and blurs anatomical boundaries [7]. Iterative approaches such as Anisotropic Diffusion aim to smooth noise within regions while preserving edges, but can produce unrealistic “plastic” textures or staircase artifacts that negatively affect radiomic measures [8]. Wavelet-based denoising methods separate noise and signal across frequency bands, though they often introduce ringing around high-contrast metabolic regions [9]. Histogram-based methods like Contrast Limited Adaptive Histogram Equalization (CLAHE) can improve local contrast, but their intensity redistribution can distort quantitative uptake values (Standardized Uptake Values—SUV), introduce block-like artifacts, and saturate high-uptake areas, reducing the reliability of the scan [10].
In recent years, Deep Learning (DL) approaches such as Super-Resolution GANs (SRGANs) [11], CycleGANs [12], and U-Net–based models [13] have been widely adopted for PET enhancement. These methods can generate detailed textures and reduce noise, often outperforming classical techniques in perceptual quality metrics. However, several issues limit their clinical applicability. The most serious concern is “hallucination”, where GAN-based models produce realistic-looking but artificial anatomical structures that are not present in the patient’s scan [14]. In addition, these models require large paired datasets (e.g., low-dose/high-dose PET pairs), which are rarely available in real clinical settings [15]. Their performance also tends to drop when applied to data from different scanners, as small variations in reconstruction protocols or hardware can disrupt generalization [16]. Finally, the “black box” nature of deep neural networks raises concerns about interpretability and reliability. Clinicians often hesitate to trust the pixel-level modifications produced by models whose decision processes cannot be easily explained [17].
To address these limitations without relying on “black-box” or unstable deep learning models, we propose a data-driven 3D enhancement framework. The method combines a Laplacian operator for edge detection with a Difference-of-Gaussians (DoG) filter for band-pass detail extraction. A gradient-gating mechanism ensures that sharpening occurs only along true anatomical boundaries, while uniform regions and their native noise characteristics remain unchanged. This design produces a parameter-efficient algorithm that improves local metabolic contrast while keeping noise amplification close to unity (Noise Gain ≈ 1).
The main contributions of this work are summarized as follows:
1. Hybrid 3D Enhancement Framework: We present a hybrid algorithm that merges Laplacian-based contrast modulation with gradient-gated DoG detail injection. This formulation effectively separates contrast enhancement from noise amplification, overcoming a key limitation of classical filtering.
2. Safety-Oriented Evaluation Metrics: We propose and apply two radiomic safety metrics, Noise Gain (NG) and Edge Preservation Index (EPI), to demonstrate that the proposed method avoids the texture corruption and artifacts commonly introduced by Bilateral filtering.
3. Clinical Validation on MCI Detection: Using a large ADNI cohort (964 CN and 964 MCI FDG-PET scans), we show that the proposed enhancement consistently improves MCI classification performance across modern deep learning architectures and all three anatomical planes.
Enhancement of FDG-PET image quality has been approached through both classical signal-processing pipelines and modern deep learning techniques. In the following, we summarize recent open-access studies that address the balance between resolution, noise reduction, and quantitative reliability in PET imaging.
Flaus et al. [18] introduced a deep learning framework to improve the visibility of focal epilepsy lesions using high-quality simulated PET phantoms as ground truth. A ResNet model was trained to map standard low-quality FDG-PET scans to sharpened, high-resolution outputs. The method improved quantitative metrics (Peak Signal-to-Noise Ratio–PSNR, Structural Similarity Index Measure–SSIM) and substantially increased the detection rate of subtle cortical hypometabolism (from 38% to 75%), with greater reader confidence.
Chaudhari et al. [19] developed a Convolutional Neural Network (CNN)-based low-count PET enhancement method for whole-body oncology imaging. Using a U-Net–style architecture, the model restored image quality in scans acquired at one-quarter of the standard dose. Blinded readers across multiple centers rated the enhanced scans as not inferior to full-dose reconstructions. Importantly, SUV measurements, lesion detectability, and diagnostic performance were preserved (sensitivity 0.94, specificity 0.98).
Song et al. [20] applied a Generative Adversarial Network for PET super-resolution. By incorporating adversarial loss, the GAN produced sharper cortical boundaries and more realistic uptake textures than conventional CNNs. Although perceptual improvements were notable, the authors emphasized the need for careful validation to avoid introducing hallucinated features.
Hashimoto et al. [21] explored the use of the deep image prior (DIP) for PET denoising. Instead of relying on external datasets, a convolutional generator was optimized directly on each subject’s dynamic PET series, using the high-count frame of each scan as implicit supervision. The method preserved temporal uptake patterns and outperformed Gaussian and guided filtering. A later 3D U-Net variant further improved denoising quality while avoiding excessive smoothing.
Jiang et al. [22] proposed TriPLET, an end-to-end multi-domain framework that processes PET data in the projection, frequency, and image domains. The pipeline couples a Transformer-based sinogram denoiser, a wavelet U-Net reconstructor, and a GAN discriminator to enforce multi-level consistency. TriPLET achieved state-of-the-art results on real low-dose PET data, recovering standard-dose image quality from quarter-dose inputs with improved SNR and structural fidelity.
Xue et al. [23] introduced LCPR-Net, a hybrid reconstruction–super-resolution model based on domain-transform CNNs and CycleGAN training. Operating directly on low-count sinograms, the network reconstructs full-count PET images with a cyclic consistency constraint that reduces the risk of hallucination. LCPR-Net achieved the highest PSNR/SSIM and the lowest error across baselines, while outperforming conventional iterative methods in both speed and image quality.
Yoshimura et al. [24] proposed a Residual Dense Network for super-resolution of half-duration FDG-PET scans. Trained on paired low-count and full-count data from 108 subjects, the model produced images with markedly improved contrast and clarity. Visual assessments confirmed that super-resolved PET images closely matched the quality of standard full-dose scans.
Chen et al. [25] presented Deep Progressive Learning (DPL), an AI-driven reconstruction algorithm integrated directly into the PET reconstruction pipeline. DPL improved image quality across all body mass index (BMI) groups compared with standard Ordered Subset Expectation Maximization (OSEM), resulting in better lesion visibility and increased diagnostic confidence. The method demonstrated consistent gains across patients’ body compositions.
To provide a structured overview of the research area, Table 1 summarizes the key characteristics of classical signal-processing methods and modern deep learning approaches for PET image enhancement. This comparison highlights the trade-offs that motivate the design of the proposed framework.

As shown in Table 1, classical methods offer transparency and quantitative stability but struggle to enhance fine structural detail without amplifying noise. Deep learning methods can achieve superior perceptual quality but introduce risks of hallucination, dataset dependency, and limited generalization. The proposed Hybrid Laplacian-DoG framework combines the interpretability and noise stability of classical approaches with targeted structural enhancement, without requiring training data or introducing learned artifacts.
The experimental evaluation was performed using ADNI FDG-PET data (available online: adni.loni.usc.edu). The aim of ADNI is to determine whether MRI, PET, biological markers, and cognitive assessments can jointly characterize the progression of MCI and early Alzheimer’s disease. FDG-PET scans were obtained from ADNI-1, ADNI-GO, and ADNI-2 phases, using ADNI’s baseline diagnostic labels.
In this study, we focused on FDG-PET, as cerebral glucose metabolism is a sensitive marker of neuronal dysfunction associated with cognitive decline. To support a balanced and unbiased assessment of the proposed enhancement method in downstream classification experiments, we constructed a dataset consisting of 964 Cognitively Normal (CN) and 964 MCI scans, yielding a total of 1928 scans.
Subjects were selected according to their baseline diagnostic status, as defined by standard ADNI criteria. CN subjects showed no memory complaints, demonstrated normal memory performance adjusted for age and education, had a CDR score of 0, and showed no evidence of significant neurological or psychiatric disease. Subjects classified as MCI met ADNI criteria, including subjective memory concern, objective memory impairment on standardized neuropsychological tests, largely preserved activities of daily living, and absence of dementia. All selections were performed at the subject level to avoid data leakage.
Table 2 summarizes the demographic and clinical characteristics of the study cohort.

All scans underwent a standardized preprocessing pipeline consisting of the following sequential steps:
1. Brain extraction: Non-brain tissue was removed using SynthStrip [26], a learning-based skull-stripping tool that generalizes across modalities without requiring modality-specific retraining.
2. Spatial normalization: Each brain-extracted volume was registered to the Montreal Neurological Institute (MNI) 152 FDG-PET template using FSL FLIRT [27]. The resulting volumes were resampled to an isotropic voxel resolution of
3. Intensity masking: A binary brain mask M was generated by retaining voxels with strictly positive values exceeding a noise threshold defined as
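A mask of this kind can be sketched in NumPy; the specific threshold below (a fraction of the 99th-percentile positive intensity) is an illustrative assumption, not the paper’s exact definition.

```python
import numpy as np

def brain_mask(volume, frac=0.05):
    """Binary mask of voxels exceeding a noise threshold.

    Assumption (for illustration only): tau = frac * 99th percentile
    of the strictly positive intensities.
    """
    positive = volume[volume > 0]
    tau = frac * np.percentile(positive, 99)
    return volume > tau

# Toy example: a bright "brain" block on a dim background.
rng = np.random.default_rng(0)
vol = rng.uniform(0.0, 0.01, size=(16, 16, 16))
vol[4:12, 4:12, 4:12] += 1.0
mask = brain_mask(vol)
```

In this toy case the mask retains the bright block and rejects the low-intensity background.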
No additional smoothing, intensity normalization (e.g., global mean scaling), or partial volume correction was applied prior to enhancement, ensuring that the evaluation reflects the method’s ability to operate on minimally processed clinical data.
3.1.1 Computational Environment
All experiments were conducted on a Linux-based workstation running Ubuntu 22.04, equipped with a single NVIDIA RTX 4090 GPU, an AMD Ryzen 9 5900X CPU, and 32 GB of system memory. The processing and evaluation pipeline was implemented in Python (v3.10), using NumPy and SciPy for numerical computation, NiBabel for neuroimaging data handling, and scikit-image for image processing operations. Brain extraction was performed using SynthStrip [26], and spatial normalization was carried out with FSL [27].
The proposed enhancement framework is fully deterministic and implemented using standard convolution and point-wise operations. Processing a single 3D FDG-PET volume requires approximately 0.6–1.0 s on CPU, depending on I/O overhead. Over 1000 repeated runs on a representative FDG-PET volume yielded an average runtime of
3.2 Proposed Hybrid Enhancement Framework
We propose a hybrid 3D enhancement framework that combines Laplacian-modulated contrast amplification with an edge-aware DoG detail injector. The approach operates sequentially, first enhancing the broad structural contrast, and then selectively adding high-frequency detail. Let
The proposed enhancement framework is intended to be used as a pre-processing step for automated image analysis pipelines rather than as a standalone reconstruction or diagnostic tool. Its design prioritizes structural fidelity and noise preservation to improve the reliability of downstream computational tasks, such as classification or feature extraction.
Fig. 1 illustrates the conceptual enhancement algorithm. For clarity, we summarize here the complete experimental workflow used in this study. Each FDG-PET volume undergoes: spatial normalization and brain masking; enhancement using the proposed hybrid method or comparison baselines; extraction of 2D mid-slices in sagittal, coronal, and axial planes; input preparation and augmentation for classification networks; and quantitative evaluation using reconstruction and classification metrics. Enhancement is applied independently to each 3D volume prior to any learning-based processing, and no information from the classification stage is used to tune enhancement parameters.

Figure 1: End-to-end processing pipeline used in this study. Raw FDG-PET volumes are spatially normalized and brain-masked, enhanced using the proposed hybrid or baseline methods, and then converted into 2D mid-slices for classification. Reconstruction quality metrics are computed directly on enhanced 3D volumes, while classification performance is evaluated on the extracted slices.
For clarity, all weighting terms are defined voxel-wise and indexed by spatial location
3.2.1 Stage 1: Laplacian-Modulated Contrast
Conventional contrast enhancement techniques, such as histogram equalization, often perform poorly in PET volumes because they indiscriminately amplify noise in low-uptake regions and oversaturate high-intensity metabolic hotspots [28]. In contrast, the Laplacian module functions as a selective amplifier: local gain
The first stage decomposes the input volume V into a base layer
The gain map relies on two weighting components: edge strength and signal intensity. First, we compute the Laplacian of Gaussian (LoG) response to detect structural boundaries:
where
where
Second, to avoid amplifying noise in low-uptake regions and to prevent saturation in high-intensity metabolic areas, we introduce an intensity weighting term
The final Laplacian-enhanced volume
where
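The paper’s exact weighting formulas are given in its equations; the following is only a minimal NumPy/SciPy sketch of Stage 1, with assumed functional forms for the edge and intensity weights (the scales `sigma_base`, `sigma_log` and the gain `alpha` are illustrative, not the published settings).

```python
import numpy as np
from scipy import ndimage as ndi

def laplacian_contrast(V, sigma_base=2.0, sigma_log=1.0, alpha=0.5):
    """Stage 1 sketch: Laplacian-modulated contrast amplification.

    V is split into a Gaussian base layer and a detail residual.
    A voxel-wise gain combines an edge weight (normalized |LoG|)
    with an intensity weight that tapers in low-uptake and
    near-saturated regions -- both forms are assumptions here.
    """
    base = ndi.gaussian_filter(V, sigma_base)
    detail = V - base

    log = ndi.gaussian_laplace(V, sigma_log)
    w_edge = np.abs(log) / (np.abs(log).max() + 1e-8)  # 0..1 edge strength

    p99 = np.percentile(V, 99)
    u = np.clip(V / (p99 + 1e-8), 0.0, 1.0)
    w_int = 4.0 * u * (1.0 - u)                        # peaks at mid uptake

    gain = 1.0 + alpha * w_edge * w_int                # >= 1 everywhere
    return base + gain * detail

rng = np.random.default_rng(1)
V = ndi.gaussian_filter(rng.random((24, 24, 24)), 2.0)
V_lap = laplacian_contrast(V)
```

With `alpha=0` the gain map is identically one and the input is reproduced, which makes the decomposition easy to sanity-check.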
3.2.2 Stage 2: Edge-Aware DoG Injection
Although Stage 1 improves local contrast, PET volumes still lack fine structural definition due to their inherently low spatial resolution. Stage 2 compensates for this by injecting high-frequency detail extracted using a DoG filter. A key component of this hybrid design is the gradient-gate term
The second stage introduces targeted band-pass detail in
Adding directly
The gradient suppression mask
The gradient-gating function is designed to act as a soft anatomical edge detector rather than as an explicit model of PET noise statistics. In FDG-PET, Poisson noise is approximately signal-dependent and largely uncorrelated at the voxel level, whereas true anatomical boundaries manifest themselves as coherent gradients that persist after light smoothing. The exponential form of
The final hybrid volume
Finally,
The proposed framework includes a small set of scale and weighting parameters chosen to align with the spatial resolution and intensity characteristics of FDG-PET rather than to optimize any specific downstream metric. The Gaussian scales were selected to reflect physiologically plausible structural ranges:
The modulation parameters
Percentile-based quantities such as
These percentile anchors are deterministic functions of the input volume: given a specific scan, the same percentiles and enhancement are always obtained. As such, they function not as tunable hyperparameters but as stable normalization references tied directly to each subject’s empirical intensity distribution. Because their role is normalization rather than optimization, they are not subject to sensitivity tuning in the same sense as modulation parameters (e.g.,
For all comparison methods, parameters were fixed across the entire dataset and were not tuned on a per-image, per-subject, or task-specific basis. Parameter values were selected based on commonly used settings reported in the literature and preliminary visual inspection to ensure stable behavior on FDG-PET volumes, rather than to optimize any quantitative metric. No dataset-level optimization or label-driven tuning was performed for any baseline method.
Specifically, unsharp masking was implemented using a Gaussian blur with
We compare the proposed method with three established enhancement techniques. In addition, because each component of the framework can be applied independently, evaluating the Laplacian-only, DoG-only, and full Hybrid variants provides an explicit ablation study of the method’s constituent stages. This allows us to isolate the contribution of the structural Laplacian backbone, the DoG-based detail injection, and their combined effect.
Traditional linear unsharp masking enhances the high-frequency content by subtracting a blurred version of the volume from the original. For an input volume V and a Gaussian kernel
where the scaling factor
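The classical operation is compact in code; `sigma` and the scaling factor `lam` below are illustrative values, not the paper’s baseline settings.

```python
import numpy as np
from scipy import ndimage as ndi

def unsharp_mask(V, sigma=1.5, lam=0.7):
    """Linear unsharp masking: add back the high-pass residual."""
    blurred = ndi.gaussian_filter(V, sigma)
    return V + lam * (V - blurred)

# A constant volume has no high-pass content and passes through unchanged.
flat_out = unsharp_mask(np.ones((8, 8)))
```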
3.3.2 Slice-Wise Bilateral Filter
The bilateral filter weights neighboring voxels by both spatial proximity and intensity similarity. Because PET volumes often exhibit anisotropic voxel spacing, the filter is applied slice-wise. The filtered output
where
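A naive, vectorized 2D bilateral filter applied slice-wise can be sketched as follows; the `np.roll` boundary handling and the parameter values are simplifications for illustration, not an optimized clinical implementation.

```python
import numpy as np

def bilateral_2d(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Naive 2D bilateral filter: each neighbor is weighted by both
    spatial distance and intensity similarity."""
    acc = np.zeros_like(img, dtype=float)
    wsum = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            w = np.exp(-(dy ** 2 + dx ** 2) / (2 * sigma_s ** 2)
                       - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            acc += w * shifted
            wsum += w
    return acc / wsum

def bilateral_slicewise(vol, **kw):
    """Apply the 2D filter independently to each slice along the last axis."""
    return np.stack([bilateral_2d(vol[..., k], **kw)
                     for k in range(vol.shape[-1])], axis=-1)

vol = np.full((8, 8, 4), 0.5)
out = bilateral_slicewise(vol)
```

Because the weights depend on intensity differences, a constant slice is reproduced exactly, while edges receive low cross-edge weights and are preserved.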
CLAHE [29] enhances local contrast by computing histograms on small grid tiles (typically
The excess probability mass, given by
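The clip-and-redistribute step can be illustrated in isolation; the toy histogram and clip limit below are arbitrary.

```python
import numpy as np

def clip_redistribute(hist, clip_limit):
    """CLAHE clipping step: counts above clip_limit are removed and the
    excess mass is redistributed uniformly over all bins."""
    excess = np.maximum(hist - clip_limit, 0).sum()
    clipped = np.minimum(hist, clip_limit)
    return clipped + excess / hist.size

hist = np.array([50.0, 3.0, 2.0, 40.0, 5.0])
new_hist = clip_redistribute(hist, clip_limit=10.0)
```

Note that redistribution can push some bins back above the clip limit; practical CLAHE implementations therefore iterate this step, while total probability mass is conserved throughout.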
3.4 Classification Experimental Setup
To validate the practical utility of the proposed enhancement, we utilized a diverse suite of modern image classification architectures, ranging from Convolutional Neural Networks (CNNs) to Vision Transformers (ViTs) and hybrid models. The specific models evaluated include:
• Transformers & Hybrids: Vision Transformer (ViT) [30], SwiftFormer [31].
• Modern CNNs: ConvNextV2 [32], EfficientNetV2 [33], GhostNetV3 [34], MobileNetV4 [35], RegNet [36].
• State Space Models: MambaOut [37].
3.4.1 Data Partitioning and Input Representation
The dataset was divided into training and validation sets using an 85/15 subject-level split to ensure that no subject contributed scans to both sets, thus preventing data leakage. For classification, inputs were constructed from 2D mid-slices extracted from both the raw and enhanced FDG-PET volumes. Performance was independently evaluated in all three anatomical planes (sagittal, coronal, and axial) to assess plane-specific sensitivity and robustness.
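A subject-level split of this kind can be sketched as follows; the seed and helper function are hypothetical, since the paper does not publish its splitting code.

```python
import numpy as np

def subject_level_split(subject_ids, train_frac=0.85, seed=42):
    """Split scan indices so that no subject appears in both sets.

    Subjects (not scans) are shuffled and partitioned, which prevents
    leakage when one subject contributes multiple scans.
    """
    rng = np.random.default_rng(seed)
    subjects = np.unique(subject_ids)
    rng.shuffle(subjects)
    n_train = int(round(train_frac * len(subjects)))
    train_subj = set(subjects[:n_train])
    idx = np.arange(len(subject_ids))
    is_train = np.array([s in train_subj for s in subject_ids])
    return idx[is_train], idx[~is_train]

ids = np.array(["s1", "s1", "s2", "s3", "s3", "s4"])
tr, va = subject_level_split(ids, train_frac=0.75)
```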
3.4.2 Training Configuration and Augmentation
All classifiers were initialized with pre-trained weights and fine-tuned using AdamW with an initial learning rate of
3.5 Quantitative Evaluation Metrics
The proposed method is quantitatively evaluated using two sets of metrics: reconstruction quality and classification.
3.5.1 Reconstruction Quality Metrics
Peak Signal-to-Noise Ratio (PSNR)
PSNR measures the ratio of the signal’s maximum possible power to the power of corrupting noise/distortion.
where
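A minimal implementation, assuming the signal peak is taken from the reference volume (a common convention for floating-point medical data).

```python
import numpy as np

def psnr(ref, test):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return np.inf  # identical volumes
    return 10.0 * np.log10(ref.max() ** 2 / mse)
```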
Structural Similarity Index Measure (SSIM)
SSIM evaluates the perceived change in structural information. For two local windows
where
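A windowed SSIM can be sketched with SciPy box filters; the window size and dynamic range `L` below are illustrative choices, not the paper’s exact configuration.

```python
import numpy as np
from scipy import ndimage as ndi

def ssim(x, y, win=7, L=1.0, k1=0.01, k2=0.03):
    """Mean SSIM over local windows (standard stabilizing constants)."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    f = lambda a: ndi.uniform_filter(a.astype(float), win)
    mx, my = f(x), f(y)
    vx = f(x * x) - mx * mx          # local variance of x
    vy = f(y * y) - my * my          # local variance of y
    cxy = f(x * y) - mx * my         # local covariance
    s = ((2 * mx * my + c1) * (2 * cxy + c2)) / \
        ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    return float(s.mean())
```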
Contrast-to-Noise Ratio (CNR)
Calculated between high-uptake (
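One common formulation, sketched under the assumption that the low-uptake region also supplies the noise estimate; the paper’s exact ROI definitions are percentile-based.

```python
import numpy as np

def cnr(vol, roi_hi, roi_lo):
    """Contrast-to-noise ratio between a high-uptake and a low-uptake
    region (illustrative formulation)."""
    mu_hi, mu_lo = vol[roi_hi].mean(), vol[roi_lo].mean()
    noise = vol[roi_lo].std() + 1e-12
    return abs(mu_hi - mu_lo) / noise

# Synthetic 1D example: bright region vs. noisy dim region.
vol = np.zeros(32)
vol[:16] = 2.1
vol[16::2] = 0.2
roi_hi = np.zeros(32, dtype=bool)
roi_hi[:16] = True
value = cnr(vol, roi_hi, ~roi_hi)
```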
Edge Preservation Index (EPI)
To assess how effectively the enhancement preserves true anatomical boundaries without introducing artificial structure, we define the EPI. The metric is computed as the Pearson correlation between the gradient magnitudes of the raw volume (
Let the magnitude of the gradient be
Here,
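Defined this way, the EPI reduces to a few lines of NumPy.

```python
import numpy as np

def epi(raw, enhanced, mask=None):
    """Edge Preservation Index: Pearson correlation between the gradient
    magnitudes of the raw and enhanced volumes (optionally masked)."""
    gm = lambda v: np.sqrt(sum(g ** 2 for g in np.gradient(v.astype(float))))
    a, b = gm(raw), gm(enhanced)
    if mask is not None:
        a, b = a[mask], b[mask]
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

# A pure intensity rescaling preserves edge structure exactly (EPI = 1).
rng = np.random.default_rng(3)
v = rng.random((8, 8, 8))
score = epi(v, 2.0 * v)
```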
Noise Gain (NG)
We define noise gain as an image-domain proxy that quantifies how much additional high-frequency variance an enhancement method injects into tissue that appears homogeneous in the raw scan. The metric is therefore intended for relative comparison of enhancement methods under identical acquisition conditions, rather than as an absolute physiological noise measurement.
We first compute the magnitude of the gradient of the raw volume
and use it to identify voxels that are locally homogeneous in the raw image. As an operational approximation to “flat tissue”, we construct a mask
This mask is computed once from
The high-frequency residuals R are then computed by subtracting a smoothed version of the volume (obtained via Gaussian filtering with
We denote by
Values near
For structure-preserving enhancement, an NG close to unity is desirable, as it indicates that contrast is increased without altering the underlying noise statistics. Although noise reduction may be preferred in low-dose PET reconstruction, the present work focuses on enhancement rather than denoising; therefore, the target behavior is noise preservation rather than noise suppression.
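Under the operational definitions above, NG can be sketched as follows; the flat-region percentile and the residual smoothing scale are illustrative stand-ins for the paper’s exact choices.

```python
import numpy as np
from scipy import ndimage as ndi

def noise_gain(raw, enhanced, sigma=1.0, flat_pct=50):
    """Noise Gain: ratio of high-frequency residual standard deviations
    inside tissue that is flat in the *raw* scan."""
    gmag = np.sqrt(sum(g ** 2 for g in np.gradient(raw.astype(float))))
    flat = gmag <= np.percentile(gmag, flat_pct)   # mask fixed from raw

    resid = lambda v: v - ndi.gaussian_filter(v.astype(float), sigma)
    return resid(enhanced)[flat].std() / (resid(raw)[flat].std() + 1e-12)

# Identity "enhancement" leaves the noise floor untouched: NG = 1.
rng = np.random.default_rng(4)
v = rng.random((12, 12, 12))
ng_identity = noise_gain(v, v)
```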
4.1 Quantitative Reconstruction Quality
We first assess the proposed Hybrid enhancement using standard image quality metrics and compare it with established baseline methods. Table 3 reports the mean

The analysis highlights a common trade-off in existing enhancement techniques between increased contrast and loss of signal fidelity. The Bilateral filter achieves the highest apparent contrast gain (
In contrast, the proposed Hybrid method exhibits a more balanced performance profile. It achieves a CNR of
4.1.1 Analysis of Enhancement Efficiency
To further characterize the relationship between sharpening strength and artifact formation, we examined noise behavior as a function of contrast gain. Fig. 2 summarizes the Noise Gain values for all methods. Both Bilateral filtering and CLAHE substantially increase the noise floor (

Figure 2: Noise gain comparison. The dashed line represents the noise level of the raw input. The Hybrid method introduces negligible noise amplification compared to Bilateral or CLAHE.
Fig. 3 illustrates the Enhancement Efficiency by plotting the change in contrast (

Figure 3: Enhancement efficiency:
CLAHE and Unsharp Masking fall in the negative
Bilateral filtering provides the strongest contrast gain (
DoG yields moderate contrast improvements, but exhibits reduced edge preservation (
Laplacian filtering preserves edges more reliably (high EPI, minimal noise), but contributes little to contrast enhancement (
The Hybrid method balances these properties. It achieves the positive contrast gain of DoG while shifting rightward in edge preservation (
Fig. 4 shows the relationship between CNR and NG. An ideal enhancement method would appear in the upper-left region of the plot (high CNR with minimal noise amplification).

Figure 4: The y-axis denotes contrast (higher is better), while the x-axis reflects Noise Gain (lower is better). Bubble size encodes the magnitude of Noise Gain, and color represents Edge Preservation (yellow indicating high fidelity, purple indicating reduced fidelity). The Hybrid method (HYB) increases contrast vertically while remaining on the left side of the plot, indicating minimal noise amplification.
The Bilateral filter achieves the highest CNR (
Unsharp Masking shows stable edge preservation (light color) but produces limited contrast improvement, remaining near the lower end of the CNR axis (CNR < 5.15), suggesting that, while structurally safe, it does not significantly enhance metabolic contrast relative to the raw input.
The most informative comparison involves the Laplacian (LAP), DoG, and Hybrid (HYB) methods. Laplacian filtering offers the most stable noise behavior (
DoG increases CNR but shifts rightward (
The Hybrid method effectively balances these factors. It achieves a CNR comparable to DoG (
4.1.2 Sensitivity Analysis of Modulation Parameters
To assess the robustness of the proposed enhancement framework with respect to its modulation parameters, we conducted a lightweight sensitivity analysis focusing on the three parameters explicitly controlling enhancement strength: the Laplacian contrast gain

We evaluated combinations of
Across all tested configurations, the proposed method exhibited stable behavior. Contrast-to-noise improvements remained consistently positive (
These results confirm that the method is not sensitive to precise tuning of
4.2 Qualitative Visual Assessment
Visual inspection of the enhanced volumes (Fig. 5) supports the quantitative findings and highlights characteristic behaviors of each method. The red arrows mark the regions where these differences are particularly evident.

Figure 5: Montage comparison on axial, coronal, and sagittal planes. Note the clearer definition of the cortical ribbon in the Hybrid method compared to Raw.
In both the cortical ribbon (axial view) and the cerebellum (sagittal view), the Hybrid method improves structural definition by tightening the metabolic boundaries between gray and white matter. The enhancement recovers details lost to partial volume effects while preserving the underlying texture, resulting in a more coherent anatomical appearance than the Raw input.
CLAHE and Bilateral filtering show visible saturation in high-uptake regions (coronal view). Bilateral filtering produces a smoothed, “plastic” texture in which local variations are flattened, whereas CLAHE introduces block-like high-intensity artifacts that obscure subtle metabolic gradients. Both behaviors are consistent with their elevated Noise Gain and reduced SSIM.
The DoG method yields the sharpest apparent edges, but the enhancement is excessively strong. The resulting boundaries appear etched and exaggerated relative to the true tracer distribution. This aligns with its lower Edge Preservation Index (
Therefore, the visual assessments indicate that the proposed Hybrid method combines the structural fidelity of the Laplacian with the contrast enhancement capability of DoG. The resulting images preserve the natural appearance of the Raw scan, avoiding the artificial texture introduced by Bilateral filtering or CLAHE, while providing sufficient contrast improvement to reveal subtle metabolic features.
4.2.1 Structural Decomposition Analysis
Difference mapping provides additional information on how each enhancement method modifies the underlying signal by visualizing where the intensity has been added or removed (see Fig. 6).

Figure 6: Intensity difference (
CLAHE produces large low-frequency shifts across wide regions of the brain, visible as broad red and blue areas. The global intensity changes suggest that the method alters the radiotracer distribution rather than improving existing structure, raising concerns about potential errors in quantitative SUV measurements.
The Bilateral filter generates a mottled pattern of localized intensity changes, indicating that it modifies fine-scale textures in a way that disrupts the natural appearance of the scan, which is consistent with the “plastic” visual effect observed in the qualitative analysis.
The Hybrid method reveals localized, structurally significant changes, predominantly confined to the cortical ribbon. It avoids the widespread background shifts of CLAHE and the texture perturbations of Bilateral filtering, altering intensity only where necessary to recover anatomical boundaries.
The edge-magnitude difference maps (Fig. 7) further illustrate how each method alters structural boundaries.

Figure 7: Edge magnitude difference (
The DoG method (row 3) produces widespread red contours throughout the volume, indicating broad gradient amplification. This uniform strengthening of edges thickens boundaries indiscriminately and can cause adjacent metabolic regions to merge, thereby exacerbating partial-volume effects.
The Hybrid method (row 2) produces thinner, more anatomically consistent edge enhancements. Sharpening is concentrated along true cortical boundaries (gyri and sulci) and suppressed in adjacent white matter regions. The behavior confirms that the gradient-weighting term
The residual noise analysis (Fig. 8) provides a detailed view of how each algorithm alters the image’s high-frequency content. The rightmost column,

Figure 8: Noise residual analysis (
The
DoG and Unsharp Masking display speckled noise patterns that extend into homogeneous tissue, which confirms that linear sharpening approaches amplify background noise and structure equally when no gating mechanism is applied, resulting in a reduced signal-to-noise ratio in regions such as white matter.
The Laplacian baseline exhibits minimal changes in its
The Hybrid method displays sparse yet anatomically aligned residuals. Deviations occur primarily along cortical boundaries rather than in homogeneous tissue, distinguishing it from the speckled patterns of DoG or the patch-like artifacts of Bilateral filtering. This pattern confirms the intended behavior of the gradient-gating mechanism: noise remains largely unchanged in flat regions, and detail is injected only at meaningful anatomical transitions, effectively restoring structure lost to partial-volume effects without introducing artificial texture.
4.3 MCI Classification Performance
To evaluate the diagnostic performance of classifiers trained on enhanced PET images, we adopt standard binary classification metrics derived from the confusion matrix: Accuracy (ACC), Sensitivity (SEN), Specificity (SPE), Matthews Correlation Coefficient (MCC), Area Under Curve (AUC), Cohen’s Kappa (CK), and F1 Score. To quantify variability, all models were trained with multiple random seeds, and results are reported as mean
All classification models were trained using identical data splits, optimization settings, and early-stopping criteria to ensure a fair and controlled comparison. Across all architectures and anatomical planes, training and validation loss curves showed stable convergence, with no signs of divergence or overfitting. For illustration, Fig. 9 presents the training and validation loss curves for MCI classification using the enhanced PET volumes. Training curves for the raw-input setting exhibited the same qualitative convergence behavior and are therefore omitted for brevity.

Figure 9: Training and validation loss curves for MCI classification with enhanced PET data. Top row: training loss for (a) sagittal, (b) coronal, and (c) axial planes. Bottom row: corresponding validation loss for (d) sagittal, (e) coronal, and (f) axial planes.
Accuracy, sensitivity, specificity, and AUC trends were consistent across classes and models, supporting the robustness of the reported performance gains.
The proposed Hybrid enhancement yields consistent improvements in MCI classification across all architectures and anatomical planes evaluated (Fig. 10). Among the three views, the axial plane provided the strongest diagnostic performance, both before and after enhancement.

Figure 10: Impact of enhancement on classification accuracy across anatomical planes. The proposed method (green) consistently outperforms the raw baseline (grey) across all tested architectures. The + symbol denotes the relative change with respect to the corresponding baseline (unenhanced) input. Exact numerical values corresponding to each bar are reported in Tables 5 and 6.
In the axial plane, MobileNetV4 achieved the highest overall performance when trained on enhanced images, reaching a mean accuracy of approximately 96% (exact values in Tables 5 and 6).
For MambaOut in the axial plane, mean sensitivity improved substantially over the raw baseline (exact values in Tables 5 and 6).
The improvement extends beyond accuracy. The radar plot in Fig. 11 shows that enhancement broadens the performance envelope across all evaluation metrics for MobileNetV4. Sensitivity, which is critical for early detection, showed notable gains. Although multiple architectures were evaluated, MobileNetV4 is shown as a representative case in Fig. 11 because it consistently achieved the strongest and most stable performance gains across enhancement settings and anatomical planes. Similar trends were observed for other models and are reported in Tables 5 and 6 and Fig. 10.

Figure 11: Holistic performance comparison for the best model (MobileNetV4, Axial). MobileNetV4 achieved the most consistent and highest overall gains across architectures and anatomical planes; therefore, it is used here as a representative example to summarize the effect of the proposed enhancement across evaluation metrics. Exact numerical values are reported in Tables 5 and 6.


All classification metrics for all models are listed in Tables 5 and 6.
The results on raw data reveal clear differences across anatomical planes and model families. The axial plane consistently provides the strongest diagnostic performance, with EfficientNetV2 and ConvNextV2 achieving the highest mean accuracies on unenhanced input (exact values in Table 5).
Applying the proposed Hybrid enhancement produces consistent improvements across all metrics, architectures, and planes. The most visible gains appear in sensitivity and MCC, both of which indicate that classifiers detect a greater proportion of true MCI cases and make more reliable decisions. In the sagittal plane, ViT accuracy increases markedly over the raw baseline (exact values in Tables 5 and 6).
When comparing the two tables, a consistent pattern emerges: Hybrid enhancement narrows the performance gap between architectures and stabilizes metric variability across planes. On raw data, mean accuracy ranges span up to 7 percentage points for some planes, whereas enhanced data reduces these spreads, suggesting a more uniform and separable feature space. Sensitivity gains are particularly relevant from a clinical perspective, with MambaOut (Axial) showing one of the largest improvements (exact values in Table 6).
The results of this study show that the proposed Hybrid Laplacian-DoG framework mitigates the longstanding trade-off between contrast enhancement and noise amplification in FDG-PET neuroimaging. By combining a structure-preserving Laplacian base with a gradient-gated DoG detail injector, the method produces a balanced enhancement profile, achieving contrast amplification while keeping Noise Gain near unity.
A key finding is that widely used enhancement techniques are poorly suited to the statistical properties of PET data. As illustrated in Fig. 4, Bilateral filtering and CLAHE implicitly assume that local smoothness or histogram redistribution will reduce noise. However, cortical FDG uptake is inherently textural due to underlying neurophysiology and Poisson acquisition noise. The enforcement of smoothness (Bilateral) or alteration of the global intensity distribution (CLAHE) disrupts these subtle patterns, leading to elevated Noise Gain values (>1.25) and degraded structural fidelity.
The Hybrid framework performs well precisely because it aligns with the modality’s physics. The Laplacian stage enhances the underlying anatomical structure without modifying the noise floor. The subsequent DoG injection is regulated by the gradient-based gating term, so detail is added only where true radiotracer gradients are present.
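The two-stage design described above can be sketched as follows. This is an illustrative reimplementation under stated assumptions: the function name, parameter names, and default values (`lap_gain`, `dog_gain`, the Gaussian scales, and the 70th-percentile gate threshold) are hypothetical and not the paper's tuned settings, and the soft clip-based gate is one plausible realization of the gradient-gating term.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def hybrid_enhance(vol, lap_gain=0.5, dog_gain=1.0,
                   sigma_fine=1.0, sigma_coarse=2.0, gate_pct=70):
    """Sketch: Laplacian local-contrast boost plus a DoG detail term
    injected only where the raw gradient magnitude is strong."""
    vol = vol.astype(float)
    # Stage 1: unsharp-style Laplacian contrast modulation.
    base = vol - lap_gain * laplace(vol)
    # Stage 2: Difference-of-Gaussians detail band.
    dog = gaussian_filter(vol, sigma_fine) - gaussian_filter(vol, sigma_coarse)
    # Gradient-based gate: ~0 in flat tissue, ~1 at strong edges.
    grad = np.linalg.norm(np.stack(np.gradient(vol)), axis=0)
    thr = np.percentile(grad, gate_pct)
    gate = np.clip(grad / (thr + 1e-12), 0.0, 1.0)
    return base + dog_gain * gate * dog
```

On a perfectly uniform volume, both the Laplacian term and the gated DoG term vanish, so the output equals the input — the behavior the noise-floor argument requires.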
From a clinical perspective, an enhancement method must not introduce artifacts that resemble pathology. Noise residual analysis (Fig. 8) was essential to evaluate this aspect. The Hybrid method produced sparse, anatomically aligned residuals, whereas Unsharp Masking exhibited diffuse speckling and Bilateral filtering showed clustered intensity shifts, which indicates artificial texture generation. The structural coherence of the Hybrid residuals underscores its safety and reliability for diagnostic workflows.
The favorable balance between noise and structure likely underlies the performance gains observed for MobileNetV4 and MambaOut. Deep learning classifiers are sensitive to high-frequency corruption. The Hybrid method provides cleaner gradients and more coherent boundaries, yielding a more separable feature space. The reduction in misclassification error on the axial plane illustrates the practical diagnostic benefit of this approach.
The consistent superiority of the axial plane is also notable. The axial slices align with the native acquisition geometry of the PET scanners, minimizing interpolation artifacts. In addition, early hypometabolism associated with Alzheimer’s disease and MCI is most prominent along the cortical ribbon in axial views. The Hybrid method is particularly effective in enhancing these subtle patterns, contributing to the observed 96% mean accuracy in axial classification.
It is important to note that EPI quantifies the spatial correspondence of edges between the raw and enhanced images rather than the magnitude of contrast amplification. A high EPI value indicates that the enhancement preserves the location and orientation of the anatomical boundaries; it does not imply that no enhancement has occurred. Instead, it confirms that the method does not introduce artificial edges or alter the geometric structure of the tracer distribution. The contrast enhancement primarily modifies the strength of the gradients, while the EPI evaluates the pattern of those gradients. Consequently, methods such as Laplacian filtering can achieve EPI scores near unity while still delivering substantial contrast amplification.
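One common way to realize such an index (an assumption; the paper's exact EPI formula is not reproduced here) is the Pearson correlation between high-pass responses of the two volumes, which is invariant to any global rescaling of gradient strength:

```python
import numpy as np
from scipy.ndimage import laplace

def edge_preservation_index(raw, enhanced):
    """EPI variant: Pearson correlation between Laplacian (high-pass)
    responses of raw and enhanced volumes. Near 1 means edge locations
    and orientations are preserved even if gradient magnitudes change."""
    hp_r = laplace(raw.astype(float)).ravel()
    hp_e = laplace(enhanced.astype(float)).ravel()
    hp_r -= hp_r.mean()
    hp_e -= hp_e.mean()
    denom = np.sqrt((hp_r ** 2).sum() * (hp_e ** 2).sum())
    return float((hp_r * hp_e).sum() / denom)
```

For example, a pure affine contrast stretch (`2 * raw + 5`) amplifies every gradient yet yields an EPI of exactly 1, illustrating why a high EPI does not mean "no enhancement happened".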
Although a dedicated ablation table is not provided, the experimental design serves as an implicit ablation study, as each module of the framework is evaluated independently. The results for the Laplacian-only variant, the DoG-only variant, and the full Hybrid method isolate the contributions of edge-preserving contrast modulation, high-frequency detail extraction, and their combined effect. The behavior of the gradient-gating term is evident from the contrast–noise relationship: the DoG baseline increases noise and reduces edge fidelity, whereas the Hybrid method restores contrast while keeping Noise Gain near unity. Similarly, intensity-weighting suppresses enhancement in low-uptake regions, preventing noise amplification in homogeneous tissue.
Despite these promising results, several limitations remain. While we did not perform an exhaustive sensitivity analysis over the full combinatorial space of all parameters, we conducted a targeted sensitivity study on the primary modulation parameters that directly control enhancement strength, namely the Laplacian gain and the DoG injection weight.
The clinical relevance of the proposed enhancement was evaluated using 2D mid-slices extracted from enhanced FDG-PET volumes, while the enhancement itself operates fully in 3D. This evaluation protocol was intentionally adopted to isolate the effect of volumetric enhancement on diagnostic signal quality, independently of confounding factors introduced by 3D classifier capacity and architectural variability. Importantly, 2D slice-based classification remains a common and accepted practice in FDG-PET studies of Alzheimer’s disease and MCI, with numerous prior works reporting clinically meaningful results under the same setting [39–43]. All classifier inputs in this study are derived from fully enhanced 3D volumes. Therefore, the observed performance gains directly reflect improvements in the underlying volumetric signal rather than slice-wise processing artifacts. While end-to-end 3D classification may further exploit spatial context, it represents a complementary research direction and is left for future work.
A further limitation relates to the interpretation of the NG metric. NG provides an image-domain estimate of high-frequency variance within regions that appear homogeneous in the raw volume and is therefore intended for relative comparison of enhancement methods under identical acquisition settings, rather than for absolute characterization of scanner noise. The percentile-based definition of “flat tissue” (voxels below the 30th percentile of the raw gradient magnitude) serves as a practical surrogate commonly used in PET image-processing studies when physical noise measurements are unavailable. In FDG-PET, this threshold reliably identifies the ventricular CSF and the deep white matter, which are physiologically low-variance regions. Nevertheless, NG remains an indirect proxy for noise behavior, and future work should incorporate phantom experiments or scanner-native noise descriptors to provide a more direct and physics-grounded assessment of noise amplification.
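The percentile-based NG computation described above can be written compactly. This is a sketch under stated assumptions: the use of the Laplacian response as the high-frequency estimator is an illustrative choice (any high-pass filter would serve), while the 30th-percentile flat-tissue mask follows the definition given in the text.

```python
import numpy as np
from scipy.ndimage import laplace

def noise_gain(raw, enhanced, flat_pct=30):
    """NG proxy: ratio of high-frequency standard deviation (Laplacian
    response) between enhanced and raw volumes, restricted to voxels
    below the 30th percentile of the raw gradient magnitude."""
    raw = raw.astype(float)
    grad = np.linalg.norm(np.stack(np.gradient(raw)), axis=0)
    flat = grad <= np.percentile(grad, flat_pct)   # "flat tissue" surrogate
    hf_raw = laplace(raw)[flat].std()
    hf_enh = laplace(enhanced.astype(float))[flat].std()
    return float(hf_enh / (hf_raw + 1e-12))
```

An identity enhancement gives NG ≈ 1, while any method that injects texture into flat regions pushes NG above 1 — the behavior used here to compare enhancement methods.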
Regarding quantitative PET measures such as SUV, we emphasize that the proposed framework is designed as a structure-preserving enhancement rather than a quantitative correction method. Global intensity scaling is explicitly preserved by median normalization, and NG values close to unity indicate that enhancement does not alter the noise floor in homogeneous tissue. However, because the method locally modulates gradient strength to restore anatomical boundaries, small voxel-level changes in SUV values near edges may occur. Importantly, the present work does not claim strict voxel-wise SUV invariance, particularly in regions affected by partial-volume effects. Instead, the method aims to improve anatomical fidelity and discriminative signal quality while maintaining global quantitative consistency. A dedicated statistical evaluation of SUV stability, including region-wise analysis and phantom validation, is an important direction for future work.
It is also important to clarify that although several components of the framework rely on data-derived percentiles (e.g., for edge normalization or intensity weighting), these operations do not introduce stochasticity. All percentile computations are deterministic functions of the input volume and produce identical results for identical inputs. Their role is to adapt the enhancement strength to the subject’s intensity distribution, analogously to widely used histogram-based normalization procedures. Thus, the framework remains fully deterministic in practice, despite employing data-dependent weighting terms.
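The determinism argument is easy to demonstrate. The helper below is hypothetical (the name `intensity_weight` and the 10th/90th-percentile bounds are illustrative, not the paper's weighting function), but it shows the relevant property: percentile-derived weights are pure functions of the input volume.

```python
import numpy as np

def intensity_weight(vol, lo_pct=10, hi_pct=90):
    """Data-derived but deterministic weighting: percentiles are pure
    functions of the input, so identical volumes yield identical weights."""
    lo, hi = np.percentile(vol, [lo_pct, hi_pct])
    return np.clip((vol - lo) / (hi - lo + 1e-12), 0.0, 1.0)
```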
A further limitation of this study is the absence of reconstruction-level methods in the comparison. Techniques such as point-spread-function (PSF) modeling, advanced OSEM variants with regularization, highly constrained back-projection (HYPR) based denoising, and proprietary deep-learning reconstructions (e.g., Siemens AI.PET) operate directly on sinogram data or incorporate scanner-specific system matrices. These approaches differ fundamentally in scope and are not directly comparable to our framework, which is a post-processing enhancement method applied to fully reconstructed standard-dose FDG-PET volumes. As such, the present evaluation focuses exclusively on image-domain techniques that can be deployed independently of scanner hardware or vendor-specific reconstruction pipelines.
No formal qualitative reader study involving nuclear medicine physicians was conducted in this work. The evaluation was based on objective, reproducible quantitative metrics and standardized visual comparisons to avoid subjective bias. Expert reader assessment represents an important direction for future validation.
Furthermore, although ADNI provides high-quality standardized data, real-world clinical scans vary widely in dose, reconstruction kernels, and acquisition protocols. Evaluating robustness across multi-site, heterogeneous datasets will be an essential next step.
This work introduced a 3D enhancement framework designed specifically for the characteristics of FDG-PET neuroimaging. By combining the structural stability of a Laplacian operator with the contrast enhancement provided by a gradient-gated Difference-of-Gaussians, the proposed Hybrid method effectively resolves the conventional trade-off between image sharpness and noise amplification.
The evaluation confirms that the method offers a practical and reliable enhancement strategy. Quantitatively, it is the only approach tested that achieves significant contrast improvement while holding Noise Gain near unity in homogeneous tissue.
The observed improvements in the detection of mild cognitive impairment further highlight its potential clinical relevance. Without modifying network architectures, Hybrid enhancement reduced the mean misclassification rate from approximately 7% to 4%, corresponding to a roughly 40% reduction in errors. These findings suggest that a substantial part of the challenge in automated MCI detection stems from limitations in the input signal rather than the model’s capacity. Enhancing PET data in a structurally faithful and noise-stable manner may therefore be a critical step toward more accurate and reliable computer-aided diagnosis.
Acknowledgement: None.
Funding Statement: The authors received no specific funding for this study.
Author Contributions: Conceptualization, Ovidijus Grigas; Data curation, Rytis Maskeliūnas; Formal analysis, Ovidijus Grigas and Rytis Maskeliūnas; Funding acquisition, Rytis Maskeliūnas; Investigation, Ovidijus Grigas and Rytis Maskeliūnas; Methodology, Ovidijus Grigas; Project administration, Rytis Maskeliūnas; Resources, Ovidijus Grigas; Software, Ovidijus Grigas; Supervision, Rytis Maskeliūnas; Validation, Ovidijus Grigas; Visualization, Ovidijus Grigas; Writing—original draft, Ovidijus Grigas; Writing—review & editing, Ovidijus Grigas and Rytis Maskeliūnas. All authors reviewed and approved the final version of the manuscript.
Availability of Data and Materials: Dataset Alzheimer’s Disease Neuroimaging Initiative (ADNI) used in this study can be accessed upon request through IDA system https://ida.loni.usc.edu (accessed on 22 November 2025).
Ethics Approval: Not applicable.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Anderson ND. State of the science on mild cognitive impairment (MCI). CNS Spectr. 2019;24(1):78–87. doi:10.1017/s1092852918001347. [Google Scholar] [PubMed] [CrossRef]
2. Pérot JB, Niewiadomska-Cimicka A, Brouillet E, Trottier Y, Flament J. Longitudinal MRI and 1H-MRS study of SCA7 mouse forebrain reveals progressive multiregional atrophy and early brain metabolite changes indicating early neuronal and glial dysfunction. PLoS One. 2024;19(1):e0296790. doi:10.1371/journal.pone.0296790. [Google Scholar] [PubMed] [CrossRef]
3. Mosconi L. Brain glucose metabolism in the early and specific diagnosis of Alzheimer’s disease: FDG-PET studies in MCI and AD. Eur J Nucl Med Mol Imaging. 2005;32(4):486–510. doi:10.1007/s00259-005-1762-7. [Google Scholar] [PubMed] [CrossRef]
4. Ibaraki M, Matsubara K, Shinohara Y, Shidahara M, Sato K, Yamamoto H, et al. Brain partial volume correction with point spreading function reconstruction in high-resolution digital PET: comparison with an MR-based method in FDG imaging. Ann Nucl Med. 2022;36(8):717–27. doi:10.1007/s12149-022-01753-5. [Google Scholar] [PubMed] [CrossRef]
5. Vigneron V, Kodewitz A, Tome AM, Lelandais S, Lang E. Alzheimer’s disease brain areas: the machine learning support for blind localization. Curr Alzheimer Res. 2016;13(5):498–508. doi:10.2174/1567205013666160314144822. [Google Scholar] [PubMed] [CrossRef]
6. Rahmim A, Tang J. Noise propagation in resolution modeled PET imaging and its impact on detectability. Phys Med Biol. 2013;58(19):6945. doi:10.1088/0031-9155/58/19/6945. [Google Scholar] [PubMed] [CrossRef]
7. Harada K, Ohashi Y, Chiba A, Numasawa K, Imai T, Hayasaka S, et al. Development of new digital phantom creation tool for evaluation of low-contrast detectability using iterative reconstruction. Nippon Hoshasen Gijutsu Gakkai Zasshi. 2018;74(8):769–78. doi:10.6009/jjrt.2018_JSRT_74.8.769. [Google Scholar] [PubMed] [CrossRef]
8. Mittal D, Kumar V, Saxena SC, Khandelwal N, Kalra N. Enhancement of the ultrasound images by modified anisotropic diffusion method. Med Biol Eng Comput. 2010;48(12):1281–91. doi:10.1007/s11517-010-0650-x. [Google Scholar] [PubMed] [CrossRef]
9. Vigneshwaran B, Maheswari RV, Subburaj P. An improved threshold estimation technique for partial discharge signal denoising using Wavelet Transform. In: Proceedings of the 2013 International Conference on Circuits, Power and Computing Technologies (ICCPCT); 2013 Mar 20–21; Nagercoil, India. p. 300–5. doi:10.1109/iccpct.2013.6528823. [Google Scholar] [CrossRef]
10. Kinahan PE, Fletcher JW. Positron emission tomography-computed tomography standardized uptake values in clinical practice and assessing response to therapy. Semin Ultrasound CT MRI. 2010;31(6):496–505. doi:10.1053/j.sult.2010.10.001. [Google Scholar] [PubMed] [CrossRef]
11. Ledig C, Theis L, Huszar F, Caballero J, Cunningham A, Acosta A, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv:1609.04802v5. 2016. doi:10.48550/arXiv.1609.04802. [Google Scholar] [CrossRef]
12. Zhu JY, Park T, Isola P, Efros AA. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv:1703.10593v7. 2017. doi:10.48550/arXiv.1703.10593. [Google Scholar] [CrossRef]
13. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. arXiv:1505.04597. 2015. doi:10.48550/arXiv.1505.04597. [Google Scholar] [CrossRef]
14. Lei Y, Harms J, Wang T, Liu Y, Shu HK, Jani AB, et al. MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks. Med Phys. 2019;46(8):3565–81. doi:10.1002/mp.13617. [Google Scholar] [PubMed] [CrossRef]
15. Tang C, Li J, Wang L, Li Z, Jiang L, Cai A, et al. Unpaired low-dose CT denoising network based on cycle-consistent generative adversarial network with prior image information. Comput Math Meth Med. 2019;2019:8639825. doi:10.1155/2019/8639825. [Google Scholar] [PubMed] [CrossRef]
16. Di Feola F, Pompilio L, Assolito C, Guarrasi V, Soda P. Texture-aware StarGAN for CT data harmonization. In: Proceedings of the 2025 International Joint Conference on Neural Networks (IJCNN); 2025 Jun 30–Jul 5; Rome, Italy. p. 1–8. doi:10.1109/ijcnn64981.2025.11228038. [Google Scholar] [PubMed] [CrossRef]
17. Marey A, Arjmand P, Alerab ADS, Eslami MJ, Saad AM, Sanchez N, et al. Explainability, transparency and black box challenges of AI in radiology: impact on patient care in cardiovascular radiology. Egypt J Radiol Nucl Med. 2024;55(1):183. doi:10.1186/s43055-024-01356-2. [Google Scholar] [CrossRef]
18. Flaus A, Deddah T, Reilhac A, De Leiris N, Janier M, Merida I, et al. PET image enhancement using artificial intelligence for better characterization of epilepsy lesions. Front Med. 2022;9:1042706. doi:10.3389/fmed.2022.1042706. [Google Scholar] [PubMed] [CrossRef]
19. Chaudhari AS, Mittra E, Davidzon GA, Gulaka P, Gandhi H, Brown A, et al. Low-count whole-body PET with deep learning in a multicenter and externally validated study. npj Digit Med. 2021;4:127. doi:10.1038/s41746-021-00497-2. [Google Scholar] [PubMed] [CrossRef]
20. Song TA, Chowdhury SR, Yang F, Dutta J. PET image super-resolution using generative adversarial networks. Neural Netw. 2020;125:83–91. doi:10.1016/j.neunet.2020.01.029. [Google Scholar] [PubMed] [CrossRef]
21. Hashimoto F, Ohba H, Ote K, Teramoto A, Tsukada H. Dynamic PET image denoising using deep convolutional neural networks without prior training datasets. IEEE Access. 2019;7:96594–603. doi:10.1109/access.2019.2929230. [Google Scholar] [CrossRef]
22. Jiang C, Liu M, Sun K, Shen D. End-to-end triple-domain pet enhancement: a hybrid denoising-and-reconstruction framework for reconstructing standard-dose PET images from low-dose PET sinograms. arXiv:2412.03617. 2024. doi:10.48550/arXiv.2412.03617. [Google Scholar] [CrossRef]
23. Xue H, Zhang Q, Zou S, Zhang W, Zhou C, Tie C, et al. LCPR-Net: low-count PET image reconstruction using the domain transform and cycle-consistent generative adversarial networks. Quant Imaging Med Surg. 2020;11(2):749–62. doi:10.21037/qims-20-66. [Google Scholar] [PubMed] [CrossRef]
24. Yoshimura T, Hasegawa A, Kogame S, Magota K, Kimura R, Watanabe S, et al. Medical radiation exposure reduction in PET via super-resolution deep learning model. Diagnostics. 2022;12(4):872. doi:10.3390/diagnostics12040872. [Google Scholar] [PubMed] [CrossRef]
25. Chen Z, Yang H, Qi M, Chen W, Liu F, Song S, et al. Enhancing 18F-FDG PET image quality and lesion diagnostic performance across different body mass index using the deep progressive learning reconstruction algorithm. Cancer Imag. 2025;25(1):58. doi:10.1186/s40644-025-00877-x. [Google Scholar] [PubMed] [CrossRef]
26. Hoopes A, Mora JS, Dalca AV, Fischl B, Hoffmann M. SynthStrip: skull-stripping for any brain image. NeuroImage. 2022;260(1):119474. doi:10.1016/j.neuroimage.2022.119474. [Google Scholar] [PubMed] [CrossRef]
27. Jenkinson M, Beckmann CF, Behrens TEJ, Woolrich MW, Smith SM. FSL. NeuroImage. 2012;62(2):782–90. doi:10.1016/j.neuroimage.2011.09.015. [Google Scholar] [PubMed] [CrossRef]
28. Vinoothna B, Rajendiran N. Design and development of contrast-limited adaptive histogram equalization technique for enhancing pet images by improving joint entropy, UIQI parameters in comparison with median filtering. AIP Conf Proc. 2024;2816:090014. doi:10.1063/5.0185943. [Google Scholar] [CrossRef]
29. Zuiderveld K. Contrast limited adaptive histogram equalization. San Diego, CA, USA: Academic Press Professional, Inc.; 1994. p. 474–85. [Google Scholar]
30. Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv:2010.11929. 2020. doi:10.48550/arXiv.2010.11929. [Google Scholar] [CrossRef]
31. Shaker A, Maaz M, Rasheed H, Khan S, Yang MH, Khan FS. SwiftFormer: efficient additive attention for transformer-based real-time mobile vision applications. arXiv:2303.15446. 2023. doi:10.48550/arXiv.2303.15446. [Google Scholar] [CrossRef]
32. Woo S, Debnath S, Hu R, Chen X, Liu Z, Kweon IS, et al. ConvNeXt V2: co-designing and scaling ConvNets with masked autoencoders. arXiv:2301.00808. 2023. doi:10.48550/arXiv.2301.00808. [Google Scholar] [CrossRef]
33. Tan M, Le QV. EfficientNetV2: smaller models and faster training. arXiv:2104.00298. 2021. doi:10.48550/arXiv.2104.00298. [Google Scholar] [CrossRef]
34. Liu Z, Hao Z, Han K, Tang Y, Wang Y. GhostNetV3: exploring the training strategies for compact models. arXiv:2404.11202. 2024. doi:10.48550/arXiv.2404.11202. [Google Scholar] [CrossRef]
35. Qin D, Leichner C, Delakis M, Fornoni M, Luo S, Yang F, et al. MobileNetV4—universal models for the mobile ecosystem. arXiv:2404.10518. 2024. doi:10.48550/arXiv.2404.10518. [Google Scholar] [CrossRef]
36. Xu J, Pan Y, Pan X, Hoi S, Yi Z, Xu Z. RegNet: self-regulated network for image classification. arXiv:2101.00590. 2021. doi:10.48550/arXiv.2101.00590. [Google Scholar] [CrossRef]
37. Yu W, Wang X. MambaOut: do we really need mamba for vision? arXiv:2405.07992. 2024. doi:10.48550/arXiv.2405.07992. [Google Scholar] [CrossRef]
38. Veraart J, Novikov DS, Christiaens D, Ades-aron B, Sijbers J, Fieremans E. Denoising of diffusion MRI using random matrix theory. NeuroImage. 2016;142(4):394–406. doi:10.1016/j.neuroimage.2016.08.016. [Google Scholar] [PubMed] [CrossRef]
39. Liu M, Cheng D, Yan W, Initiative ADN. Classification of Alzheimer’s disease by combination of convolutional and recurrent neural networks using FDG-PET images. Front Neuroinform. 2018;12:35. doi:10.3389/fninf.2018.00035. [Google Scholar] [PubMed] [CrossRef]
40. Zhang F, Li Z, Zhang B, Du H, Wang B, Zhang X. Multi-modal deep learning model for auxiliary diagnosis of Alzheimer’s disease. Neurocomputing. 2019;361(5):185–95. doi:10.1016/j.neucom.2019.04.093. [Google Scholar] [CrossRef]
41. Kim HW, Lee HE, Oh K, Lee S, Yun M, Yoo SK. Multi-slice representational learning of convolutional neural network for Alzheimer’s disease classification using positron emission tomography. Biomed Eng Online. 2020;19(1):70. doi:10.1186/s12938-020-00813-z. [Google Scholar] [PubMed] [CrossRef]
42. Pan X, Phan TL, Adel M, Fossati C, Gaidon T, Wojak J, et al. Multi-view separable pyramid network for AD prediction at MCI stage by 18F-FDG brain PET imaging. IEEE Trans Med Imaging. 2021;40(1):81–92. doi:10.1109/tmi.2020.3022591. [Google Scholar] [PubMed] [CrossRef]
43. Rehman A, Yi MK, Majeed A, Hwang SO. Early diagnosis of Alzheimer’s disease using 18F-FDG PET with soften latent representation. IEEE Access. 2024;12(11):87923–33. doi:10.1109/access.2024.3418508. [Google Scholar] [PubMed] [CrossRef]
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

