Search Results (141)
  • Open Access

    ARTICLE

    Trustworthy Personalized Federated Recommender System with Blockchain-Assisted Decentralized Reward Management

    Waqar Ali1, May Altulyan2, Ghulam Farooque3, Siyuan Li4, Jie Shao4,5,*

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.078599 - 09 April 2026

    Abstract Federated recommender systems (FedRS) enable collaborative model training while preserving user privacy, yet they remain vulnerable to adversarial attacks, unreliable client updates, and misaligned incentives in decentralized environments. Existing approaches struggle to jointly preserve personalization, robustness, and trust when user data are highly non-IID and recommendation quality is governed by ranking-oriented objectives. To address these challenges, we propose a Trustworthy Federated Recommender System (T-FedRS) that extends federated neural collaborative filtering by integrating a ranking-aware reputation mechanism and a lightweight blockchain layer for transparent incentive allocation. Personalization is preserved through locally maintained user embeddings, while item parameters…

  • Open Access

    ARTICLE

    EdgeTrustX: A Privacy-Aware Federated Transformer Framework for Scalable and Explainable IoT Threat Detection

    Saleh Alharbi*

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.073584 - 09 April 2026

    Abstract Real-time threat detection in Internet of Things (IoT) networks requires scalable, privacy-preserving, and interpretable models capable of operating under strict latency constraints. This paper presents EdgeTrustX, a privacy-aware federated transformer framework that addresses these challenges by combining transformer-based representation learning with federated optimisation, differential privacy, and homomorphic encryption. The framework enables collaborative model training across heterogeneous IoT devices without exposing sensitive local data while maintaining computational feasibility for edge deployment. A multi-head attention mechanism integrated with a secure aggregation protocol supports adaptive feature weighting and privacy-protected parameter exchange. To enhance transparency, an explainability module that…

  • Open Access

    ARTICLE

    In-Mig: Geographically Dispersed Agentic LLMs for Privacy-Preserving Artificial Intelligence

    Mohammad Nauman*

    CMC-Computers, Materials & Continua, Vol.87, No.2, 2026, DOI:10.32604/cmc.2026.077259 - 12 March 2026

    Abstract Large Language Models (LLMs) are increasingly utilized for semantic understanding and reasoning, yet their use in sensitive settings is limited by privacy concerns. This paper presents In-Mig, a mobile-agent architecture that integrates LLM reasoning within agents that can migrate across organizational venues. Unlike centralized approaches, In-Mig performs reasoning in situ, ensuring that raw data remains within institutional boundaries while allowing for cross-venue synthesis. The architecture features a policy-scoped memory model, utility-driven route planning, and cryptographic trust enforcement. A prototype using JADE for mobility and quantized Mistral-7B demonstrates practical feasibility. Evaluation across various scenarios shows that In-Mig achieves…

  • Open Access

    ARTICLE

    Blockchain-Enabled AI Recommendation Systems Using IoT-Assisted Trusted Networks

    Mekhled Alharbi1,*, Khalid Haseeb2, Mamoona Humayun3,*

    CMC-Computers, Materials & Continua, Vol.87, No.2, 2026, DOI:10.32604/cmc.2025.073832 - 12 March 2026

    Abstract The Internet of Things (IoT) and cloud computing have significantly contributed to the development of smart cities, enabling real-time monitoring, intelligent decision-making, and efficient resource management. These systems, particularly in IoT networks, rely on numerous interconnected devices that handle time-sensitive data for critical applications. In related approaches, trusted communication and reliable device interaction have been overlooked, thereby lowering security when sharing sensitive IoT data. Moreover, it incurs additional energy consumption and overhead while addressing potential threats in the dynamic environment. In this research, an Artificial Intelligence (AI) recommended fault-tolerant framework is proposed that leverages blockchain…

  • Open Access

    ARTICLE

    Promoting Psychological Well-Being in AI-Enhanced English as a Foreign Language Learning: A Mixed-Methods Study of Motivation, Language Learning Anxiety and Trust in Higher Education

    Zhiyong Sun*

    Journal of Psychology in Africa, Vol.36, No.1, pp. 33-43, 2026, DOI:10.32604/jpa.2026.074741 - 26 February 2026

    Abstract This mixed-methods study investigated how AI-enhanced English as a Foreign Language (EFL) learning environments influence students’ psychological well-being through the mediating roles of motivation and language learning anxiety and the moderating role of trust. Participants were Chinese university students (N = 310, 62% female, mean age = 18.9, SD = 0.8), of whom 15 completed interviews to both supplement and clarify the evidence from the surveys. Structural equation modeling results revealed that AI use had significant indirect effects on well-being through increased motivation and reduced language learning anxiety. Trust in AI significantly moderated…

  • Open Access

    ARTICLE

    Optimized Deep Learning Framework for Robust Detection of GAN-Induced Hallucinations in Medical Imaging

    Jarrar Amjad1, Muhammad Zaheer Sajid2, Mudassir Khalil3, Ayman Youssef4, Muhammad Fareed Hamid5, Imran Qureshi6,*, Haya Aldossary7, Qaisar Abbas6

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.2, 2026, DOI:10.32604/cmes.2026.073473 - 26 February 2026

    Abstract Generative Adversarial Networks (GANs) have become valuable tools in medical imaging, enabling realistic image synthesis for enhancement, augmentation, and restoration. However, their integration into clinical workflows raises concerns, particularly the risk of subtle distortions or hallucinations that may undermine diagnostic accuracy and weaken trust in AI-assisted decision-making. To address this challenge, we propose a hybrid deep learning framework designed to detect GAN-induced artifacts in medical images, thereby reinforcing the reliability of AI-driven diagnostics. The framework integrates low-level statistical descriptors, including high-frequency residuals and Gray-Level Co-occurrence Matrix (GLCM) texture features, with high-level semantic representations extracted from…

  • Open Access

    REVIEW

    The Transparency Revolution in Geohazard Science: A Systematic Review and Research Roadmap for Explainable Artificial Intelligence

    Moein Tosan1,*, Vahid Nourani2,3, Ozgur Kisi4,5,6, Yongqiang Zhang7, Sameh A. Kantoush8, Mekonnen Gebremichael9, Ruhollah Taghizadeh-Mehrjardi10, Jinhui Jeanne Huang11

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.1, 2026, DOI:10.32604/cmes.2025.074768 - 29 January 2026

    Abstract The integration of machine learning (ML) into geohazard assessment has successfully instigated a paradigm shift, leading to the production of models that possess a level of predictive accuracy previously considered unattainable. However, the black-box nature of these systems presents a significant barrier, hindering their operational adoption, regulatory approval, and full scientific validation. This paper provides a systematic review and synthesis of the emerging field of explainable artificial intelligence (XAI) as applied to geohazard science (GeoXAI), a domain that aims to resolve the long-standing trade-off between model performance and interpretability. A rigorous synthesis of 87 foundational…

  • Open Access

    ARTICLE

    Explainable Ensemble Learning Framework for Early Detection of Autism Spectrum Disorder: Enhancing Trust, Interpretability and Reliability in AI-Driven Healthcare

    Menwa Alshammeri1,2,*, Noshina Tariq3, NZ Jhanji4,5, Mamoona Humayun6, Muhammad Attique Khan7

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.1, 2026, DOI:10.32604/cmes.2025.074627 - 29 January 2026

    Abstract Artificial Intelligence (AI) is changing healthcare by helping with diagnosis. However, for doctors to trust AI tools, they need to be both accurate and easy to understand. In this study, we created a new machine learning system for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning. For this, we combined several different models, including Random Forest, XGBoost, and Neural Networks, into a single, more powerful framework. We used two different types…

  • Open Access

    ARTICLE

    Building Regulatory Confidence with Human-in-the-Loop AI in Paperless GMP Validation

    Manaliben Amin*

    Journal on Artificial Intelligence, Vol.8, pp. 1-18, 2026, DOI:10.32604/jai.2026.073895 - 07 January 2026

    Abstract Artificial intelligence (AI) is steadily making its way into pharmaceutical validation, where it promises faster documentation, smarter testing strategies, and better handling of deviations. These gains are attractive, but in a regulated environment speed is never enough. Regulators want assurance that every system is reliable, that decisions are explainable, and that human accountability remains central. This paper sets out a Human-in-the-Loop (HITL) AI approach for Computer System Validation (CSV) and Computer Software Assurance (CSA). It relies on explainable AI (XAI) tools but keeps structured human review in place, so automation can be used without creating…

  • Open Access

    REVIEW

    Implementation of Human-AI Interaction in Reinforcement Learning: Literature Review and Case Studies

    Shaoping Xiao1,*, Zhaoan Wang1, Junchao Li2, Caden Noeller1, Jiefeng Jiang3, Jun Wang4

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-62, 2026, DOI:10.32604/cmc.2025.072146 - 09 December 2025

    Abstract The integration of human factors into artificial intelligence (AI) systems has emerged as a critical research frontier, particularly in reinforcement learning (RL), where human-AI interaction (HAII) presents both opportunities and challenges. As RL continues to demonstrate remarkable success in model-free and partially observable environments, its real-world deployment increasingly requires effective collaboration with human operators and stakeholders. This article systematically examines HAII techniques in RL through both theoretical analysis and practical case studies. We establish a conceptual framework built upon three fundamental pillars of effective human-AI collaboration: computational trust modeling, system usability, and decision understandability. Our…

Displaying results 1–10 of 141 (page 1).