Search Results (4)
  • Open Access

    ARTICLE

    AI-Driven SDN and Blockchain-Based Routing Framework for Scalable and Trustworthy AIoT Networks

    Mekhled Alharbi, Khalid Haseeb, Mamoona Humayun

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.2, pp. 2601-2616, 2025, DOI:10.32604/cmes.2025.073039 - 26 November 2025

    Abstract: Emerging technologies and the Internet of Things (IoT) are converging to drive the growth of heterogeneous networks. These systems deliver real-time, dynamic services to end users and improve daily life. Most existing approaches aim to improve energy efficiency and ensure reliable routing; however, trustworthiness and network scalability remain significant research challenges. In this work, we introduce an AI-enabled, Software-Defined Network (SDN)-driven framework that provides secure communication, trusted behavior, and effective route maintenance. By considering multiple parameters in the forwarder selection process, the proposed framework enhances network…
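    The abstract truncates before naming the forwarder-selection parameters. As a minimal sketch of how an SDN controller might combine several such parameters into a single forwarder score — the parameters, weights, and values below are illustrative assumptions, not the paper's design:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """Candidate forwarder as seen by the SDN controller (fields are assumed)."""
    node_id: str
    residual_energy: float  # normalized 0..1
    link_quality: float     # normalized 0..1
    trust_score: float      # normalized 0..1, e.g., from blockchain-backed history
    hop_distance: int       # hops to the sink

def forwarder_score(n: Node, weights=(0.3, 0.3, 0.3, 0.1)) -> float:
    """Weighted multi-parameter score; higher is better. Weights are illustrative."""
    w_energy, w_link, w_trust, w_hop = weights
    # Fewer hops is better, so invert the hop distance into a bounded term.
    hop_term = 1.0 / (1 + n.hop_distance)
    return (w_energy * n.residual_energy + w_link * n.link_quality
            + w_trust * n.trust_score + w_hop * hop_term)

def select_forwarder(candidates: list[Node]) -> Node:
    """Pick the highest-scoring candidate as the next-hop forwarder."""
    return max(candidates, key=forwarder_score)

nodes = [
    Node("a", residual_energy=0.9, link_quality=0.7, trust_score=0.80, hop_distance=3),
    Node("b", residual_energy=0.6, link_quality=0.9, trust_score=0.95, hop_distance=2),
]
print(select_forwarder(nodes).node_id)  # "b" under these example weights
```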

  • Open Access

    ARTICLE

    Interpretable Vulnerability Detection in LLMs: A BERT-Based Approach with SHAP Explanations

    Nouman Ahmad, Changsheng Zhang

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 3321-3334, 2025, DOI:10.32604/cmc.2025.067044 - 23 September 2025

    Abstract: Source code vulnerabilities present significant security threats, necessitating effective detection techniques. Traditional static analysis tools are built on rigid rule sets and pattern matching, which drown developers in false positives and miss context-sensitive vulnerabilities. Large Language Models (LLMs) such as BERT show promise for this task but frequently lack transparency. To address model interpretability, this work proposes a BERT-based LLM approach to vulnerability detection that incorporates Explainable AI (XAI) methods such as SHAP and attention heatmaps. Furthermore, to ensure auditable and comprehensible decisions, we present a…
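    As a rough sketch of the SHAP-over-BERT workflow the abstract describes — the checkpoint path is a placeholder, and the paper's fine-tuned model, dataset, and exact pipeline are not given here — using the `shap` library's support for Hugging Face text-classification pipelines:

```python
# Token-level SHAP attributions for a BERT-based vulnerability classifier.
import shap
import transformers

# Assumed: a BERT model fine-tuned for binary vulnerable/safe classification.
clf = transformers.pipeline(
    "text-classification",
    model="path/to/bert-finetuned-vuln",      # hypothetical checkpoint
    tokenizer="path/to/bert-finetuned-vuln",  # hypothetical checkpoint
    return_all_scores=True,
)

# shap.Explainer recognizes transformers pipelines and applies a text masker.
explainer = shap.Explainer(clf)
snippet = 'strcpy(buf, user_input);  /* unbounded copy */'
shap_values = explainer([snippet])

# Highlight which tokens pushed the prediction toward "vulnerable".
shap.plots.text(shap_values[0])
```

    Attributions like these are what make individual predictions auditable: a reviewer can check whether the flagged tokens (here, the unbounded `strcpy`) actually justify the verdict.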

  • Open Access

    ARTICLE

    CrossLinkNet: An Explainable and Trustworthy AI Framework for Whole-Slide Images Segmentation

    Peng Xiao, Qi Zhong, Jingxue Chen, Dongyuan Wu, Zhen Qin, Erqiang Zhou

    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4703-4724, 2024, DOI:10.32604/cmc.2024.049791 - 20 June 2024

    Abstract: In intelligent medical diagnosis, the trustworthiness, reliability, and interpretability of Artificial Intelligence (AI) are critical, especially in cancer diagnosis. Traditional neural networks, while excellent at processing natural images, often lack interpretability and adaptability when processing high-resolution digital pathology images. This limitation is particularly evident in pathological diagnosis, the gold standard of cancer diagnosis, which relies on a pathologist's careful examination and analysis of digital pathology slides to identify the features and progression of the disease. Therefore, the integration of interpretable AI into smart medical diagnosis is not only an inevitable technological trend but…

  • Open Access

    ARTICLE

    Adversarial Attack-Based Robustness Evaluation for Trustworthy AI

    Eungyu Lee, Yongsoo Lee, Taejin Lee

    Computer Systems Science and Engineering, Vol.47, No.2, pp. 1919-1935, 2023, DOI:10.32604/csse.2023.039599 - 28 July 2023

    Abstract: Artificial Intelligence (AI) technology has been extensively researched in various fields, including malware detection. AI models must be trustworthy before they can be placed in critical decision-making and resource-protection roles. Robustness to adversarial attacks is a significant barrier to trustworthy AI. Although various adversarial attack and defense methods are actively studied, there is little research on robustness evaluation metrics that could serve as standards for judging whether AI models are safe and reliable against adversarial attacks. An AI model's robustness level cannot be evaluated by traditional evaluation…
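    The abstract truncates before defining the paper's proposed metric. A common baseline such work is positioned against is accuracy under a fixed adversarial perturbation budget; a minimal PyTorch sketch, assuming a standard image classifier and a one-step FGSM attack (the conventional measurement, not the paper's metric):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """One-step FGSM: perturb inputs along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def robust_accuracy(model, loader, eps=0.03):
    """Share of test examples still classified correctly after the attack;
    the gap to clean accuracy is a simple robustness indicator."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```

    A single-attack, single-budget number like this is exactly the kind of coarse measurement the abstract argues is insufficient as a general robustness standard.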
