Special Issues

Towards Privacy-preserving, Secure and Trustworthy AI-enabled Systems

Submission Deadline: 31 October 2025 (closed)

Guest Editors

Prof. Weizhi Meng

Email: w.meng3@lancaster.ac.uk

Affiliation: Department of Computing and Communications, Lancaster University, Lancaster, LA1 4YW, UK

Research Interests: blockchain, AI, security


Dr. Chunhua Su

Email: chsu@u-aizu.ac.jp

Affiliation: Division of Computer Science, University of Aizu, Aizuwakamatsu, 965-8580, Japan

Research Interests: cryptography and secret sharing, IoT


Dr. Chao Chen

Email: chao.chen@rmit.edu.au

Affiliation: Department of Accounting, Information Systems and Supply Chain, RMIT University, Melbourne, 3000, Australia

Research Interests: cybersecurity and artificial intelligence


Summary

The great promise of AI-enabled systems is that they can improve efficiency, drive innovation, solve complex problems, and profoundly impact the economy and society. Realizing this potential, however, requires addressing technical, ethical, and societal challenges. Through the proper development and deployment of AI technology, we can create a more intelligent, efficient, and sustainable future.


At the same time, AI-enabled systems may suffer from a range of security and privacy issues, including data privacy leakage, adversarial attacks, model theft, data poisoning, and model bias. Addressing these problems requires both technical and ethical efforts to ensure the security, reliability, and fairness of AI systems. This special issue aims to bring together the latest research on security, privacy, and robustness techniques for building privacy-preserving, secure, and trustworthy AI-enabled systems.


Potential topics include but are not limited to:
· Attack and defense techniques for AI-enabled systems
· Explainable AI and interpretability
· Intrusion detection and prevention in AI-enabled systems
· Privacy and data protection in AI-enabled systems
· Blockchain technology in AI-enabled systems
· Robustness in AI models
· Automated verification and testing of AI systems
· Fuzz testing technology for AI systems
· Privacy risk assessment technology for AI-enabled systems


Keywords

Privacy-preserving, trust management, AI, security, risk analysis

Published Papers


  • Open Access

    ARTICLE

    Privacy-Preserving Gender-Based Customer Behavior Analytics in Retail Spaces Using Computer Vision

    Ginanjar Suwasono Adi, Samsul Huda, Griffani Megiyanto Rahmatullah, Dodit Suprianto, Dinda Qurrota Aini Al-Sefy, Ivon Sandya Sari Putri, Lalu Tri Wijaya Nata Kusuma
    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-23, 2026, DOI:10.32604/cmc.2025.068619
    (This article belongs to the Special Issue: Towards Privacy-preserving, Secure and Trustworthy AI-enabled Systems)
    Abstract In the competitive retail industry of the digital era, data-driven insights into gender-specific customer behavior are essential. They support the optimization of store performance, layout design, product placement, and targeted marketing. However, existing computer vision solutions often rely on facial recognition to gather such insights, raising significant privacy and ethical concerns. To address these issues, this paper presents a privacy-preserving customer analytics system through two key strategies. First, we deploy a deep learning framework using YOLOv9s, trained on the RCA-TVGender dataset. Cameras are positioned perpendicular to observation areas to reduce facial visibility while maintaining accurate…

  • Open Access

    ARTICLE

    Transfer Learning-Based Approach with an Ensemble Classifier for Detecting Keylogging Attack on the Internet of Things

    Yahya Alhaj Maz, Mohammed Anbar, Selvakumar Manickam, Mosleh M. Abualhaj, Sultan Ahmed Almalki, Basim Ahmad Alabsi
    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 5287-5307, 2025, DOI:10.32604/cmc.2025.068257
    (This article belongs to the Special Issue: Towards Privacy-preserving, Secure and Trustworthy AI-enabled Systems)
    Abstract The Internet of Things (IoT) is an innovation that combines the virtual space with the physical world on a single platform. Because of the recent rapid rise of IoT devices, there has been a lack of standards, leading to a massive increase in unprotected devices connecting to networks. Consequently, cyberattacks on IoT are becoming more common, particularly keylogging attacks, which are often caused by security vulnerabilities on IoT networks. This research focuses on the role of transfer learning and ensemble classifiers in enhancing the detection of keylogging attacks within small, imbalanced IoT datasets. The authors propose…

  • Open Access

    ARTICLE

    Proactive Disentangled Modeling of Trigger–Object Pairings for Backdoor Defense

    Kyle Stein, Andrew A. Mahyari, Guillermo Francia III, Eman El-Sheikh
    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 1001-1018, 2025, DOI:10.32604/cmc.2025.068201
    (This article belongs to the Special Issue: Towards Privacy-preserving, Secure and Trustworthy AI-enabled Systems)
    Abstract Deep neural networks (DNNs) and generative AI (GenAI) are increasingly vulnerable to backdoor attacks, where adversaries embed triggers into inputs to cause models to misclassify or misinterpret target labels. Beyond traditional single-trigger scenarios, attackers may inject multiple triggers across various object classes, forming unseen backdoor-object configurations that evade standard detection pipelines. In this paper, we introduce DBOM (Disentangled Backdoor-Object Modeling), a proactive framework that leverages structured disentanglement to identify and neutralize both seen and unseen backdoor threats at the dataset level. Specifically, DBOM factorizes input image representations by modeling triggers and objects as independent primitives in the…
