Special Issues

Multi-Modal Deep Learning for Advanced Medical Diagnostics

Submission Deadline: 31 July 2025 (closed)

Guest Editors

Prof. Simon Fong

Email: ccfong@umac.mo

Affiliation: Department of Computer and Information Science, University of Macau, Macau, China

Research Interests: Data mining, Deep learning, AI for medical diagnosis


Prof. Sabah Mohammed

Email: sabah.mohammed@lakeheadu.ca

Affiliation: Department of Computer Science, Lakehead University, Thunder Bay, Canada

Research Interests: Artificial Intelligence, Generative AI, Machine Learning, Graph-Based Data Analytics, Translational Medical Informatics


Prof. Tengyue Li

Email: yb97475@um.edu.mo

Affiliation: Faculty of Information, North China University of Technology, Beijing, China

Research Interests: Artificial Intelligence, Deep Learning, Multi-modal Algorithms, Tumor Analytics


Summary

The next frontier in medical diagnostics lies in integrating multi-modal deep learning with big data analytics from diverse sources such as radiology images, histological data, and multi-omic datasets including transcriptomics. This Special Issue aims to explore how these techniques can enhance diagnostic accuracy, streamline data processing, and enable personalized medicine. Fusing data from different medical formats offers significant opportunities to improve patient outcomes through comprehensive analysis and cross-referencing of biological and clinical information. However, challenges such as data heterogeneity, computational complexity, and the need for robust cross-modal correlation must be addressed to realize the full potential of multi-modal deep learning.


We invite submissions that focus on innovative solutions and novel frameworks in multi-modal data integration, advanced deep learning models, and their applications in real-world healthcare scenarios. The issue will cover topics such as the utilization of multi-omic data for precision medicine, the integration of radiology and histological data for accurate disease diagnosis, and AI-based models for personalized healthcare. By advancing the field of multi-modal deep learning, this Special Issue aims to shape the future of medical diagnostics and contribute to better clinical outcomes worldwide. 
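
To make the data-fusion setting above concrete, the sketch below shows one common late-fusion pattern: a small convolutional encoder for an imaging modality and a multilayer perceptron for tabular omics features, whose embeddings are concatenated and passed to a shared diagnostic head. It is a minimal illustration in PyTorch only; the module names, layer sizes, class count, and input dimensions are arbitrary assumptions and do not describe any particular method or submission to this issue.

```python
# Minimal late-fusion sketch: a CNN encoder for an imaging modality and an
# MLP encoder for tabular omics features, concatenated into a shared
# diagnostic head. All dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class ImagingEncoder(nn.Module):
    """Small CNN producing a fixed-size embedding from a single-channel scan."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.features(x).flatten(1))

class OmicsEncoder(nn.Module):
    """MLP embedding a vector of omics features (e.g., expression values)."""
    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class LateFusionClassifier(nn.Module):
    """Concatenates modality embeddings and predicts a diagnostic label."""
    def __init__(self, omics_dim: int, num_classes: int = 2, embed_dim: int = 128):
        super().__init__()
        self.imaging = ImagingEncoder(embed_dim)
        self.omics = OmicsEncoder(omics_dim, embed_dim)
        self.head = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, image: torch.Tensor, omics: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.imaging(image), self.omics(omics)], dim=1)
        return self.head(fused)

# Example forward pass with dummy data: a batch of 4 single-channel 64x64
# scans paired with 500-dimensional omics vectors.
model = LateFusionClassifier(omics_dim=500)
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 500))
print(logits.shape)  # torch.Size([4, 2])
```

Early fusion of raw inputs and attention-based cross-modal fusion are common alternatives; the concatenation head above is simply the smallest variant that shows heterogeneous modalities being combined in one model.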


The topics of interest for this special issue include, but are not limited to:

· Multi-modal data integration techniques

· Machine learning and deep learning for radiology and histological data integration

· AI applications in precision medicine and multi-omic data

· Handling data heterogeneity in medical diagnostics

· Big data management and computational approaches in healthcare

· Personalized healthcare through multi-modal machine and deep learning

· Benchmarking and evaluation of machine learning and deep learning models

· Innovations in multi-omic data analysis for patient outcomes

· Ethical considerations and data privacy in healthcare AI

· Advancements in AI, machine learning, and deep learning for healthcare

· Integration and visualization of heterogeneous medical data

· Emerging technologies in medical imaging and diagnostics

· Machine learning and deep learning architectures for medical data analysis

· Transfer learning and domain adaptation in medical imaging

· Explainable AI, automated feature extraction, and data augmentation

· Neural network optimization, robustness, and generalization

· Comparison of machine learning and deep learning with traditional methods

· Challenges in deploying machine learning and deep learning models in clinical settings


Keywords

Multi-modal deep learning in healthcare, Big data integration in medical diagnostics, Radiology and histology data fusion, Multi-omic data analytics for precision medicine, AI-driven personalized diagnostics, Cross-modal data correlation, Advanced healthcare data processing

Published Papers


  • Open Access

    ARTICLE

    Generated Preserved Adversarial Federated Learning for Enhanced Image Analysis (GPAF)

    Sanaa Lakrouni, Slimane Bah, Marouane Sebgui
    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 5555-5569, 2025, DOI:10.32604/cmc.2025.067654
    (This article belongs to the Special Issue: Multi-Modal Deep Learning for Advanced Medical Diagnostics)
    Abstract: Federated Learning (FL) has recently emerged as a promising paradigm that enables medical institutions to collaboratively train robust models without centralizing sensitive patient information. Data collected from different institutions represent distinct source domains. Consequently, discrepancies in feature distributions can significantly hinder a model’s generalization to unseen domains. While domain generalization (DG) methods have been proposed to address this challenge, many may compromise data privacy in FL by requiring clients to transmit their local feature representations to the server. Furthermore, existing adversarial training methods, commonly used to align marginal feature distributions, fail to ensure the consistency… (A generic federated averaging sketch of this collaborative training setup appears after the paper list below.)

  • Open Access

    REVIEW

    Deep Multi-Scale and Attention-Based Architectures for Semantic Segmentation in Biomedical Imaging

    Majid Harouni, Vishakha Goyal, Gabrielle Feldman, Sam Michael, Ty C. Voss
    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 331-366, 2025, DOI:10.32604/cmc.2025.067915
    (This article belongs to the Special Issue: Multi-Modal Deep Learning for Advanced Medical Diagnostics)
    Abstract: Semantic segmentation plays a foundational role in biomedical image analysis, providing precise information about cellular, tissue, and organ structures in both biological and medical imaging modalities. Traditional approaches often fail in the face of challenges such as low contrast, morphological variability, and densely packed structures. Recent advancements in deep learning have transformed segmentation capabilities through the integration of fine-scale detail preservation, coarse-scale contextual modeling, and multi-scale feature fusion. This work provides a comprehensive analysis of state-of-the-art deep learning models, including U-Net variants, attention-based frameworks, and Transformer-integrated networks, highlighting innovations that improve accuracy, generalizability, and computational…

  • Open Access

    REVIEW

    A Comprehensive Review of Multimodal Deep Learning for Enhanced Medical Diagnostics

    Aya M. Al-Zoghby, Ahmed Ismail Ebada, Aya S. Saleh, Mohammed Abdelhay, Wael A. Awad
    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 4155-4193, 2025, DOI:10.32604/cmc.2025.065571
    (This article belongs to the Special Issue: Multi-Modal Deep Learning for Advanced Medical Diagnostics)
    Abstract: Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration and learning from diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles, as well as radiological imaging and histopathological slides, makes this approach increasingly important because, when examined separately, these data sources only offer a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review…

  • Open Access

    ARTICLE

    Enhanced Cutaneous Melanoma Segmentation in Dermoscopic Images Using a Dual U-Net Framework with Multi-Path Convolution Block Attention Module and SE-Res-Conv

    Kun Lan, Feiyang Gao, Xiaoliang Jiang, Jianzhen Cheng, Simon Fong
    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 4805-4824, 2025, DOI:10.32604/cmc.2025.065864
    (This article belongs to the Special Issue: Multi-Modal Deep Learning for Advanced Medical Diagnostics)
    Abstract: With the continuous development of artificial intelligence and machine learning techniques, effective methods have emerged to support the work of dermatologists in skin cancer detection. However, significant challenges remain in accurately segmenting melanomas in dermoscopic images due to objects that can interfere with human observation, such as bubbles and scales. To address these challenges, we propose a dual U-Net network framework for skin melanoma segmentation. In our proposed architecture, we introduce several innovative components that aim to enhance the performance and capabilities of the traditional U-Net. First, we establish…
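
The first article listed above relies on federated learning, in which institutions jointly train a shared model without pooling patient data. For orientation, the sketch below shows generic federated averaging (FedAvg) in PyTorch; it is not the GPAF method described in that paper, it omits the adversarial, privacy-preserving, and domain-generalization components, and every function name, model, and hyperparameter here is an illustrative assumption.

```python
# Generic federated averaging (FedAvg) sketch: each simulated client trains a
# local copy of the shared model on its own data, and the "server" replaces the
# global weights with the element-wise average of the client weights.
# Illustrative only; assumes floating-point parameters and equal client weighting.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def local_update(global_model: nn.Module, loader: DataLoader,
                 epochs: int = 1, lr: float = 1e-3) -> dict:
    """Train a client-side copy of the global model and return its weights."""
    client = copy.deepcopy(global_model)
    opt = torch.optim.SGD(client.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    client.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(client(x), y).backward()
            opt.step()
    return client.state_dict()

def federated_average(state_dicts: list) -> dict:
    """Element-wise average of the clients' weight dictionaries."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        for sd in state_dicts[1:]:
            avg[key] = avg[key] + sd[key]
        avg[key] = avg[key] / len(state_dicts)
    return avg

def run_round(global_model: nn.Module, client_loaders: list) -> nn.Module:
    """One communication round: local training on every client, then averaging."""
    updates = [local_update(global_model, loader) for loader in client_loaders]
    global_model.load_state_dict(federated_average(updates))
    return global_model

# Tiny synthetic example: a linear classifier over 20-dimensional features and
# three clients, each holding 32 random samples with binary labels.
model = nn.Linear(20, 2)
clients = [
    DataLoader(TensorDataset(torch.randn(32, 20), torch.randint(0, 2, (32,))),
               batch_size=8)
    for _ in range(3)
]
model = run_round(model, clients)
print(sum(p.numel() for p in model.parameters()))  # 42 parameters in this toy model
```

Each communication round trains every client from the same global weights and then averages them; practical systems typically weight clients by dataset size and add secure aggregation or differential privacy on top.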
