
Security, Privacy, and Robustness for Trustworthy AI Systems

Submission Deadline: 30 June 2024 (closed)

Guest Editors

Dr. Yongjun Ren, Nanjing University of Information Science and Technology, China.
Dr. Weizhi Meng, Technical University of Denmark, Denmark.
Dr. Chunhua Su, University of Aizu, Japan.
Dr. Chao Chen, RMIT University, Australia.

Summary

As artificial intelligence (AI) technology continues to penetrate social, economic, and everyday life, researchers have become increasingly concerned about its security. Despite its immense potential, AI technology, particularly deep learning, is plagued by problems of robustness, model backdoors, fairness, and privacy. Given the high complexity and limited interpretability of neural network models, detecting and defending against these security risks remains a significant challenge. This is particularly critical in safety-related fields such as aerospace, intelligent medicine, and unmanned aerial vehicles, where the credibility, reliability, and interpretability of AI are of utmost importance. Ensuring the safety of AI has therefore become a major trend and research hotspot worldwide.

 

This special issue aims to bring together the latest research on security, privacy, and robustness techniques for trustworthy AI systems. We also welcome authors to present other recent advances addressing these issues.

 

Potential topics include but are not limited to:

  • Attack and defense technology of AI systems

  • Explainable AI and interpretability

  • Fairness, bias, and discrimination in AI systems

  • Privacy and data protection in AI systems

  • Security and privacy in federated learning

  • Robustness of federated learning models

  • Automated verification and testing of AI systems

  • Fuzz testing technology for AI systems

  • Privacy risk assessment technology for AI systems

  • Application of AI in software engineering and information security


Keywords

artificial intelligence, federated learning, robustness, security, privacy

Published Papers


  • Open Access

    ARTICLE

    A Model for Detecting Fake News by Integrating Domain-Specific Emotional and Semantic Features

    Wen Jiang, Mingshu Zhang, Xu'an Wang, Wei Bin, Xiong Zhang, Kelan Ren, Facheng Yan
    CMC-Computers, Materials & Continua, DOI:10.32604/cmc.2024.053762
    (This article belongs to the Special Issue: Security, Privacy, and Robustness for Trustworthy AI Systems)
    Abstract With the rapid spread of information on the Internet and the proliferation of fake news, fake news detection becomes more and more important. Traditional detection methods often rely on a single emotional or semantic feature to identify fake news, but these methods have limitations when dealing with news in specific domains. To solve the problem of weak feature correlation between data from different domains, a model for detecting fake news by integrating domain-specific emotional and semantic features is proposed. This method makes full use of the attention mechanism, grasping the correlation between different…
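As a toy illustration of the kind of attention-weighted feature fusion the abstract above describes (not the authors' actual model; the vector names, shapes, and the relevance vector `w` are all assumptions), an emotional and a semantic feature vector could be combined as:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_features(emotional, semantic, w):
    # Attention-weighted fusion of two feature vectors (hypothetical sketch):
    # score each feature set against a relevance vector w, then mix them.
    feats = np.stack([emotional, semantic])   # shape (2, d)
    scores = feats @ w                        # one relevance score per feature set
    alpha = softmax(scores)                   # attention weights, sum to 1
    return alpha @ feats                      # fused representation, shape (d,)
```

In a trained model, `w` would be learned jointly with the classifier rather than fixed by hand.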

  • Open Access

    ARTICLE

    Enhancing AI System Privacy: An Automatic Tool for Achieving GDPR Compliance in NoSQL Databases

    Yifei Zhao, Zhaohui Li, Siyi Lv
    CMC-Computers, Materials & Continua, Vol.80, No.1, pp. 217-234, 2024, DOI:10.32604/cmc.2024.052310
    (This article belongs to the Special Issue: Security, Privacy, and Robustness for Trustworthy AI Systems)
    Abstract The EU’s Artificial Intelligence Act (AI Act) imposes requirements for the privacy compliance of AI systems. AI systems must comply with privacy laws such as the GDPR when providing services. These laws provide users with the right to issue a Data Subject Access Request (DSAR). Responding to such requests requires database administrators to identify information related to an individual accurately. However, manual compliance poses significant challenges and is error-prone: database administrators must write the necessary queries by hand, which is time-consuming labor. The demand for large amounts of data by AI systems has driven the development of NoSQL databases.…
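Responding to a DSAR ultimately means locating every record tied to a data subject. A minimal, purely illustrative scan over JSON-like NoSQL documents (assuming the subject is identified by an email address; this is not the paper's tool, and real schemas need far more careful identifier matching) might look like:

```python
def find_subject_records(documents, subject_email):
    # Naive DSAR scan: recursively search each document's nested values
    # for the subject's identifier (here assumed to be an email address).
    def contains(value):
        if isinstance(value, dict):
            return any(contains(v) for v in value.values())
        if isinstance(value, list):
            return any(contains(v) for v in value)
        return value == subject_email
    return [doc for doc in documents if contains(doc)]
```

A production tool would instead derive the fields to inspect from the database schema, which is exactly the kind of automation the paper targets.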

  • Open Access

    ARTICLE

    A Gaussian Noise-Based Algorithm for Enhancing Backdoor Attacks

    Hong Huang, Yunfei Wang, Guotao Yuan, Xin Li
    CMC-Computers, Materials & Continua, Vol.80, No.1, pp. 361-387, 2024, DOI:10.32604/cmc.2024.051633
    (This article belongs to the Special Issue: Security, Privacy, and Robustness for Trustworthy AI Systems)
    Abstract Deep Neural Networks (DNNs) are integral to various aspects of modern life, enhancing work efficiency. Nonetheless, their susceptibility to diverse attack methods, including backdoor attacks, raises security concerns. We aim to investigate backdoor attack methods for image categorization tasks, to promote the development of DNNs towards higher security. Research on backdoor attacks currently faces significant challenges due to the distinct and abnormal data patterns of malicious samples, and the meticulous data screening by developers, hindering practical attack implementation. To overcome these challenges, this study proposes a Gaussian Noise-Targeted Universal Adversarial Perturbation (GN-TUAP) algorithm. This approach…
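A generic sketch of how a fixed Gaussian noise pattern can serve as a backdoor trigger during data poisoning (an assumption-laden simplification of the general technique, not the GN-TUAP algorithm itself; `sigma`, `rate`, and the clipping range are illustrative choices):

```python
import numpy as np

def make_gaussian_trigger(shape, sigma=0.05, seed=0):
    # A fixed Gaussian noise pattern used as a (hypothetical) backdoor trigger;
    # low sigma keeps the perturbation visually inconspicuous.
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, sigma, size=shape)

def poison(images, labels, trigger, target_label, rate=0.1, seed=1):
    # Stamp the trigger onto a random fraction of training images
    # and relabel them with the attacker's target class.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx] = np.clip(images[idx] + trigger, 0.0, 1.0)  # keep valid pixel range
    labels[idx] = target_label
    return images, labels
```

A model trained on such data tends to learn the trigger-to-target association, which is the behavior backdoor defenses try to detect.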

  • Open Access

    ARTICLE

    CrossLinkNet: An Explainable and Trustworthy AI Framework for Whole-Slide Images Segmentation

    Peng Xiao, Qi Zhong, Jingxue Chen, Dongyuan Wu, Zhen Qin, Erqiang Zhou
    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4703-4724, 2024, DOI:10.32604/cmc.2024.049791
    (This article belongs to the Special Issue: Security, Privacy, and Robustness for Trustworthy AI Systems)
    Abstract In the intelligent medical diagnosis area, Artificial Intelligence (AI)’s trustworthiness, reliability, and interpretability are critical, especially in cancer diagnosis. Traditional neural networks, while excellent at processing natural images, often lack interpretability and adaptability when processing high-resolution digital pathological images. This limitation is particularly evident in pathological diagnosis, which is the gold standard of cancer diagnosis and relies on a pathologist’s careful examination and analysis of digital pathological slides to identify the features and progression of the disease. Therefore, the integration of interpretable AI into smart medical diagnosis is not only an inevitable technological trend but…

  • Open Access

    ARTICLE

    EG-STC: An Efficient Secure Two-Party Computation Scheme Based on Embedded GPU for Artificial Intelligence Systems

    Zhenjiang Dong, Xin Ge, Yuehua Huang, Jiankuo Dong, Jiang Xu
    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4021-4044, 2024, DOI:10.32604/cmc.2024.049233
    (This article belongs to the Special Issue: Security, Privacy, and Robustness for Trustworthy AI Systems)
    Abstract This paper presents a comprehensive exploration into the integration of Internet of Things (IoT), big data analysis, cloud computing, and Artificial Intelligence (AI), which has led to an unprecedented era of connectivity. We delve into the emerging trend of machine learning on embedded devices, enabling tasks in resource-limited environments. However, the widespread adoption of machine learning raises significant privacy concerns, necessitating the development of privacy-preserving techniques. One such technique, secure multi-party computation (MPC), allows collaborative computations without exposing private inputs. Despite its potential, complex protocols and communication interactions hinder performance, especially on resource-constrained devices. Efforts…
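The core primitive behind such MPC protocols, additive secret sharing, can be sketched in a few lines (a didactic two-party version over a fixed modulus, not the EG-STC scheme; real protocols add communication, correlated randomness for multiplication, and GPU-optimized arithmetic):

```python
import random

MOD = 2**32  # arithmetic modulus agreed on by both parties

def share(x):
    # Split x into two additive shares; either share alone is a uniformly
    # random value and reveals nothing about x.
    r = random.randrange(MOD)
    return r, (x - r) % MOD

def reconstruct(a, b):
    # Only by combining both shares is the secret recovered.
    return (a + b) % MOD

def add_shares(xa, ya):
    # Secure addition: each party adds its local shares; no plaintext
    # ever leaves either party.
    return (xa + ya) % MOD
```

Addition is "free" in this model; it is multiplication and non-linear operations that dominate the protocol and communication costs the paper works to reduce.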

  • Open Access

    ARTICLE

    Robust Information Hiding Based on Neural Style Transfer with Artificial Intelligence

    Xiong Zhang, Minqing Zhang, Xu An Wang, Wen Jiang, Chao Jiang, Pan Yang
    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 1925-1938, 2024, DOI:10.32604/cmc.2024.050899
    (This article belongs to the Special Issue: Security, Privacy, and Robustness for Trustworthy AI Systems)
    Abstract This paper proposes an artificial intelligence-based robust information hiding algorithm to address the issue of confidential information being susceptible to noise attacks during transmission. The algorithm we designed aims to mitigate the impact of various noise attacks on the integrity of secret information during transmission. The method we propose involves encoding secret images into stylized encrypted images and applies adversarial transfer to both the style and content features of the original and embedded data. This process effectively enhances the concealment and imperceptibility of confidential information, thereby improving the security of such information during transmission and…

  • Open Access

    ARTICLE

    FL-EASGD: Federated Learning Privacy Security Method Based on Homomorphic Encryption

    Hao Sun, Xiubo Chen, Kaiguo Yuan
    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2361-2373, 2024, DOI:10.32604/cmc.2024.049159
    (This article belongs to the Special Issue: Security, Privacy, and Robustness for Trustworthy AI Systems)
    Abstract Federated learning ensures data privacy and security by sharing models among multiple computing nodes instead of plaintext data. However, there is still a potential risk of privacy leakage; for example, attackers can recover the original data through model inference attacks. Therefore, safeguarding the privacy of model parameters becomes crucial. One proposed solution involves incorporating homomorphic encryption algorithms into the federated learning process. However, existing homomorphic-encryption-based federated learning privacy protection schemes greatly reduce efficiency and robustness when there are performance differences between parties or abnormal nodes. To solve the above…
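The idea of aggregating encrypted model updates can be illustrated with a toy Paillier cryptosystem, which is additively homomorphic: multiplying ciphertexts adds the underlying plaintexts, so a server can sum client updates without decrypting any of them. This is a deliberately tiny, insecure sketch of the general idea, not the FL-EASGD construction; real deployments use ~2048-bit keys and quantize model weights to integers first.

```python
import random

p, q = 999_983, 1_000_003   # small known primes: demo only, cryptographically insecure
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1)
g = n + 1                   # standard simple choice of generator

def encrypt(m):
    # Paillier encryption; plaintext m must satisfy 0 <= m < n
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then multiply by lam^{-1} mod n
    x = pow(c, lam, n2)
    return ((x - 1) // n) * pow(lam, -1, n) % n

def aggregate(ciphertexts):
    # Server-side aggregation: the product of ciphertexts decrypts to the
    # sum of plaintexts, so no individual client update is ever exposed.
    acc = 1
    for c in ciphertexts:
        acc = (acc * c) % n2
    return acc
```

Only the key holder (or a threshold of parties, in more elaborate schemes) can decrypt the aggregate; the efficiency cost of these modular exponentiations is exactly what schemes like the one above try to manage.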

  • Open Access

    ARTICLE

    Trusted Certified Auditor Using Cryptography for Secure Data Outsourcing and Privacy Preservation in Fog-Enabled VANETs

    Nagaraju Pacharla, K. Srinivasa Reddy
    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 3089-3110, 2024, DOI:10.32604/cmc.2024.048133
    (This article belongs to the Special Issue: Security, Privacy, and Robustness for Trustworthy AI Systems)
    Abstract With the recent technological developments, massive vehicular ad hoc networks (VANETs) have been established, enabling numerous vehicles and their respective Road Side Unit (RSU) components to communicate with one another. The best way to enhance traffic flow for vehicles and traffic management departments is to share the data they receive. However, VANET systems still lack adequate protection. An effective and safe method of outsourcing is suggested, which reduces computation costs by achieving data security using a homomorphic mapping based on the conjugate operation of matrices. This research proposes a VANET-based data outsourcing…

  • Open Access

    ARTICLE

    Differentially Private Support Vector Machines with Knowledge Aggregation

    Teng Wang, Yao Zhang, Jiangguo Liang, Shuai Wang, Shuanggen Liu
    CMC-Computers, Materials & Continua, Vol.78, No.3, pp. 3891-3907, 2024, DOI:10.32604/cmc.2024.048115
    (This article belongs to the Special Issue: Security, Privacy, and Robustness for Trustworthy AI Systems)
    Abstract With widespread data collection and processing, privacy-preserving machine learning has become increasingly important in addressing privacy risks related to individuals. Support vector machine (SVM) is one of the most elementary learning models of machine learning. Privacy issues surrounding SVM classifier training have attracted increasing attention. In this paper, we investigate Differential Privacy-compliant Federated Machine Learning with Dimensionality Reduction, called FedDPDR-DPML, which greatly improves data utility while providing strong privacy guarantees. In distributed learning scenarios, multiple participants usually hold unbalanced or small amounts of data. Therefore, FedDPDR-DPML enables multiple participants to collaboratively learn a global…
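The basic differential privacy building block such schemes rely on is the Laplace mechanism: noise calibrated to a query's sensitivity and the privacy budget ε. The sketch below is an illustrative output-perturbation example, not FedDPDR-DPML itself; the clipping bound and the sensitivity accounting for the mean are simplifying assumptions.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    # Release a numeric result with an epsilon-DP guarantee by adding
    # Laplace noise with scale = sensitivity / epsilon.
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale, size=np.shape(value))

def dp_aggregate(weight_vectors, clip=1.0, epsilon=1.0, rng=None):
    # Clip each party's weights to [-clip, clip] to bound how much any one
    # party can move the mean, then release a noisy mean of the weights.
    w = np.clip(np.asarray(weight_vectors), -clip, clip)
    sensitivity = 2 * clip / len(w)   # per-coordinate bound on one party's influence
    return laplace_mechanism(w.mean(axis=0), sensitivity, epsilon, rng)
```

Smaller ε means stronger privacy but noisier (less useful) aggregates; improving that utility-privacy trade-off is the point of combining DP with dimensionality reduction and knowledge aggregation.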

  • Open Access

    ARTICLE

    Research on Data Tampering Prevention Method for ATC Network Based on Zero Trust

    Xiaoyan Zhu, Ruchun Jia, Tingrui Zhang, Song Yao
    CMC-Computers, Materials & Continua, Vol.78, No.3, pp. 4363-4377, 2024, DOI:10.32604/cmc.2023.045615
    (This article belongs to the Special Issue: Security, Privacy, and Robustness for Trustworthy AI Systems)
    Abstract Traditional air traffic control (ATC) information sharing offers weak protection for personal privacy data, making the shared data easy to tamper with. Starting from the application of the ATC network, this paper focuses on zero trust, zero-trust access strategies, and tamper-proofing of information-sharing network data. Through improvements to the ATC network's zero-trust physical-layer authentication and distributed feature-differentiation calculation over network data, this paper reconstructs the personal privacy scope authentication structure and designs a tamper-proof method of…
