Search Results (137)
  • Open Access

    ARTICLE

    Promoting psychological well-being in AI-enhanced English as a foreign language learning: A mixed-methods study of motivation, language learning anxiety and trust in higher education

    Zhiyong Sun*

    Journal of Psychology in Africa, Vol.36, No.1, pp. 33-43, 2026, DOI:10.32604/jpa.2026.074741 - 26 February 2026

    Abstract This mixed-methods study investigated how AI-enhanced English as a Foreign Language (EFL) learning environments influence students’ psychological well-being through the mediating roles of motivation and language learning anxiety and the moderating role of trust. Participants were Chinese university students (N = 310, 62% female, mean age = 18.9, SD = 0.8), of whom 15 completed interviews to supplement and clarify the survey evidence. Structural equation modeling results revealed that AI use had significant indirect effects on well-being through increased motivation and reduced language learning anxiety. Trust in AI significantly moderated… More >

  • Open Access

    ARTICLE

    Optimized Deep Learning Framework for Robust Detection of GAN-Induced Hallucinations in Medical Imaging

    Jarrar Amjad, Muhammad Zaheer Sajid, Mudassir Khalil, Ayman Youssef, Muhammad Fareed Hamid, Imran Qureshi*, Haya Aldossary, Qaisar Abbas

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.2, 2026, DOI:10.32604/cmes.2026.073473 - 26 February 2026

    Abstract Generative Adversarial Networks (GANs) have become valuable tools in medical imaging, enabling realistic image synthesis for enhancement, augmentation, and restoration. However, their integration into clinical workflows raises concerns, particularly the risk of subtle distortions or hallucinations that may undermine diagnostic accuracy and weaken trust in AI-assisted decision-making. To address this challenge, we propose a hybrid deep learning framework designed to detect GAN-induced artifacts in medical images, thereby reinforcing the reliability of AI-driven diagnostics. The framework integrates low-level statistical descriptors, including high-frequency residuals and Gray-Level Co-occurrence Matrix (GLCM) texture features, with high-level semantic representations extracted from… More >

  • Open Access

    REVIEW

    The Transparency Revolution in Geohazard Science: A Systematic Review and Research Roadmap for Explainable Artificial Intelligence

    Moein Tosan*, Vahid Nourani, Ozgur Kisi, Yongqiang Zhang, Sameh A. Kantoush, Mekonnen Gebremichael, Ruhollah Taghizadeh-Mehrjardi, Jinhui Jeanne Huang

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.1, 2026, DOI:10.32604/cmes.2025.074768 - 29 January 2026

    Abstract The integration of machine learning (ML) into geohazard assessment has successfully instigated a paradigm shift, leading to the production of models that possess a level of predictive accuracy previously considered unattainable. However, the black-box nature of these systems presents a significant barrier, hindering their operational adoption, regulatory approval, and full scientific validation. This paper provides a systematic review and synthesis of the emerging field of explainable artificial intelligence (XAI) as applied to geohazard science (GeoXAI), a domain that aims to resolve the long-standing trade-off between model performance and interpretability. A rigorous synthesis of 87 foundational… More >

  • Open Access

    ARTICLE

    Explainable Ensemble Learning Framework for Early Detection of Autism Spectrum Disorder: Enhancing Trust, Interpretability and Reliability in AI-Driven Healthcare

    Menwa Alshammeri*, Noshina Tariq, NZ Jhanji, Mamoona Humayun, Muhammad Attique Khan

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.1, 2026, DOI:10.32604/cmes.2025.074627 - 29 January 2026

    Abstract Artificial Intelligence (AI) is changing healthcare by helping with diagnosis. However, for doctors to trust AI tools, they need to be both accurate and easy to understand. In this study, we created a new machine learning system for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning. For this, we combined several different models, including Random Forest, XGBoost, and Neural Networks, into a single, more powerful framework. We used two different types… More >

  • Open Access

    ARTICLE

    Building Regulatory Confidence with Human-in-the-Loop AI in Paperless GMP Validation

    Manaliben Amin*

    Journal on Artificial Intelligence, Vol.8, pp. 1-18, 2026, DOI:10.32604/jai.2026.073895 - 07 January 2026

    Abstract Artificial intelligence (AI) is steadily making its way into pharmaceutical validation, where it promises faster documentation, smarter testing strategies, and better handling of deviations. These gains are attractive, but in a regulated environment speed is never enough. Regulators want assurance that every system is reliable, that decisions are explainable, and that human accountability remains central. This paper sets out a Human-in-the-Loop (HITL) AI approach for Computer System Validation (CSV) and Computer Software Assurance (CSA). It relies on explainable AI (XAI) tools but keeps structured human review in place, so automation can be used without creating… More >

  • Open Access

    REVIEW

    Implementation of Human-AI Interaction in Reinforcement Learning: Literature Review and Case Studies

    Shaoping Xiao*, Zhaoan Wang, Junchao Li, Caden Noeller, Jiefeng Jiang, Jun Wang

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-62, 2026, DOI:10.32604/cmc.2025.072146 - 09 December 2025

    Abstract The integration of human factors into artificial intelligence (AI) systems has emerged as a critical research frontier, particularly in reinforcement learning (RL), where human-AI interaction (HAII) presents both opportunities and challenges. As RL continues to demonstrate remarkable success in model-free and partially observable environments, its real-world deployment increasingly requires effective collaboration with human operators and stakeholders. This article systematically examines HAII techniques in RL through both theoretical analysis and practical case studies. We establish a conceptual framework built upon three fundamental pillars of effective human-AI collaboration: computational trust modeling, system usability, and decision understandability. Our… More >

  • Open Access

    ARTICLE

    Resilient Security Framework for Lottery and Betting Kiosks under Ransomware Attacks

    Sapan Pandya*

    Journal of Cyber Security, Vol.7, pp. 637-651, 2025, DOI:10.32604/jcs.2025.073670 - 24 December 2025

    Abstract Ransomware has evolved from opportunistic malware into a global economic weapon, crippling critical services and extracting billions in illicit revenue. While most research has centered on enterprise networks and healthcare systems, an equally vulnerable frontier is emerging in lottery and betting kiosks—self-service financial Internet of Things (IoT) devices that handle billions of dollars annually. These terminals operate unattended, rely on legacy operating systems, and interact with sensitive transactional data, making them prime ransomware targets. This paper introduces a Resilient Security Framework (RSF) for kiosks under ransomware threat conditions. RSF integrates three defensive layers: (1) prevention… More >

  • Open Access

    REVIEW

    Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation

    Khulud Salem Alshudukhi*, Sijjad Ali, Mamoona Humayun*, Omar Alruwaili

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3029-3085, 2025, DOI:10.32604/cmes.2025.073705 - 23 December 2025

    Abstract Problem: The integration of Artificial Intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the “black box” nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves. Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity—as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a… More >

  • Open Access

    ARTICLE

    Trust-Aware AI-Enabled Edge Framework for Intelligent Traffic Control in Cyber-Physical Systems

    Khalid Haseeb, Imran Qureshi*, Naveed Abbas, Muhammad Ali, Muhammad Arif Shah, Qaisar Abbas

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4349-4362, 2025, DOI:10.32604/cmes.2025.072326 - 23 December 2025

    Abstract The rapid evolution of smart cities has led to the deployment of Cyber-Physical IoT Systems (CPS-IoT) for real-time monitoring, intelligent decision-making, and efficient resource management, particularly in intelligent transportation and vehicular networks. Edge intelligence plays a crucial role in these systems by enabling low-latency processing and localized optimization for dynamic, data-intensive, and vehicular environments. However, challenges such as high computational overhead, uneven load distribution, and inefficient utilization of communication resources significantly hinder scalability and responsiveness. Our research presents a robust framework that integrates artificial intelligence and edge-level traffic prediction for CPS-IoT systems. Distributed computing for… More >

  • Open Access

    ARTICLE

    Calibrating Trust in Generative Artificial Intelligence: A Human-Centered Testing Framework with Adaptive Explainability

    Sewwandi Tennakoon, Eric Danso, Zhenjie Zhao*

    Journal on Artificial Intelligence, Vol.7, pp. 517-547, 2025, DOI:10.32604/jai.2025.072628 - 01 December 2025

    Abstract Generative Artificial Intelligence (GenAI) systems have achieved remarkable capabilities across text, code, and image generation; however, their outputs remain prone to errors, hallucinations, and biases. Users often overtrust these outputs due to limited transparency, which can lead to misuse and decision errors. This study addresses the challenge of calibrating trust in GenAI through a human-centered testing framework enhanced with adaptive explainability. We introduce a methodology that adjusts explanations dynamically according to user expertise, model output confidence, and contextual risk factors, providing guidance that is informative but not overwhelming. The framework was evaluated using outputs… More >

Displaying results 1-10 of 137 on page 1.