
Security, Privacy, and Robustness for Trustworthy AI Systems

Submission Deadline: 30 June 2024

Guest Editors

Dr. Yongjun Ren, Nanjing University of Information Science and Technology, China.
Dr. Weizhi Meng, Technical University of Denmark, Denmark.
Dr. Chunhua Su, University of Aizu, Japan.
Dr. Chao Chen, RMIT University, Australia.

Summary

As artificial intelligence (AI) technology continues to penetrate social, economic, and everyday-life domains, researchers have become increasingly concerned about its security. Despite its immense potential, AI technology, particularly deep learning, is plagued by problems of robustness, model backdoors, fairness, and privacy. Given the high complexity and limited interpretability of neural network models, detecting and defending against these security risks remains a significant challenge. This is particularly critical in safety-related fields such as aerospace, intelligent medicine, and unmanned aerial vehicles, where the credibility, reliability, and interpretability of AI are of utmost importance. Ensuring the safety of AI has therefore become a major trend and research hotspot worldwide.

 

This special issue aims to bring together the latest research on security, privacy, and robustness techniques for trustworthy AI systems. We also welcome submissions presenting other recent advances addressing these issues.

 

Potential topics include but are not limited to:

  • Attack and defense techniques for AI systems

  • Explainable AI and interpretability

  • Fairness, bias, and discrimination in AI systems

  • Privacy and data protection in AI systems

  • Security and privacy in federated learning

  • Robustness of federated learning models

  • Automated verification and testing of AI systems

  • Fuzz testing technology for AI systems

  • Privacy risk assessment technology for AI systems

  • Application of AI in software engineering and information security


Keywords

artificial intelligence, federated learning, robustness, security, privacy

Published Papers


  • Open Access

    ARTICLE

    FL-EASGD: Federated Learning Privacy Security Method Based on Homomorphic Encryption

    Hao Sun, Xiubo Chen, Kaiguo Yuan
    CMC-Computers, Materials & Continua, DOI:10.32604/cmc.2024.049159
    (This article belongs to the Special Issue: Security, Privacy, and Robustness for Trustworthy AI Systems)
    Abstract Federated learning ensures data privacy and security by sharing models among multiple computing nodes instead of plaintext data. However, there is still a risk of privacy leakage; for example, attackers can recover the original data through model inference attacks. Safeguarding the privacy of model parameters therefore becomes crucial. One proposed solution is to incorporate homomorphic encryption algorithms into the federated learning process. However, existing homomorphic-encryption-based privacy protection schemes for federated learning greatly reduce efficiency and robustness when there are performance differences among parties or abnormal nodes. To solve these problems, this paper proposes a…
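
    The core idea in this line of work, aggregating encrypted model updates so the server never sees an individual party's parameters, can be illustrated with a short sketch. The example below assumes a toy Paillier cryptosystem with insecure tiny primes, a fixed-point encoding of float weights, and hypothetical client updates; it is not the FL-EASGD scheme itself:

        import math
        import random

        # --- Toy Paillier cryptosystem (tiny primes, illustration only) ---
        def keygen():
            p, q = 1009, 1013                 # real keys use ~2048-bit moduli
            n = p * q
            lam = math.lcm(p - 1, q - 1)      # requires Python 3.9+
            mu = pow(lam, -1, n)              # valid because we pick g = n + 1
            return (n, n + 1), (lam, mu, n)   # (public key, private key)

        def encrypt(pk, m):
            n, g = pk
            n2 = n * n
            r = random.randrange(1, n)
            while math.gcd(r, n) != 1:        # r must be coprime to n
                r = random.randrange(1, n)
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

        def decrypt(sk, c):
            lam, mu, n = sk
            x = pow(c, lam, n * n)
            return ((x - 1) // n) * mu % n

        SCALE = 10_000                        # fixed-point encoding for floats

        # Two clients encrypt one (hypothetical) model weight each.
        pk, sk = keygen()
        updates = [0.42, 0.38]
        cts = [encrypt(pk, int(round(w * SCALE))) for w in updates]

        # Multiplying ciphertexts mod n^2 adds the underlying plaintexts, so
        # the server aggregates without seeing any individual client's weight.
        n2 = pk[0] ** 2
        agg = 1
        for c in cts:
            agg = agg * c % n2

        print(decrypt(sk, agg) / SCALE / len(updates))  # ≈ 0.40

    The additive homomorphism is what lets the server compute the averaged model blindly; real schemes must additionally handle negative weights, key management, and the straggler and abnormal-node issues the abstract mentions.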

  • Open Access

    ARTICLE

    Differentially Private Support Vector Machines with Knowledge Aggregation

    Teng Wang, Yao Zhang, Jiangguo Liang, Shuai Wang, Shuanggen Liu
    CMC-Computers, Materials & Continua, Vol.78, No.3, pp. 3891-3907, 2024, DOI:10.32604/cmc.2024.048115
    (This article belongs to the Special Issue: Security, Privacy, and Robustness for Trustworthy AI Systems)
    Abstract With widespread data collection and processing, privacy-preserving machine learning has become increasingly important for addressing privacy risks to individuals. The support vector machine (SVM) is one of the most elementary models in machine learning, and privacy issues surrounding SVM classifier training have attracted increasing attention. In this paper, we investigate Differential Privacy-compliant Federated Machine Learning with Dimensionality Reduction, called FedDPDR-DPML, which greatly improves data utility while providing strong privacy guarantees. In distributed learning scenarios, multiple participants usually hold unbalanced or small amounts of data. FedDPDR-DPML therefore enables multiple participants to collaboratively learn a global model based on weighted…
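
    For orientation on the differential privacy side, the sketch below applies output perturbation to a linear SVM: train, then add Laplace noise calibrated to the model's sensitivity. The toy data, the per-coordinate Laplace noise (a simplification of the norm-sampled noise used in the classical output-perturbation analysis), and the sensitivity bound 2/(n·λ) for L2-regularized hinge loss with ‖x‖ ≤ 1 are illustrative assumptions, not the FedDPDR-DPML algorithm:

        import numpy as np

        rng = np.random.default_rng(0)

        def train_linear_svm(X, y, lam=0.1, epochs=200, lr=0.01):
            # Plain sub-gradient descent on the L2-regularized hinge loss.
            w = np.zeros(X.shape[1])
            for _ in range(epochs):
                for xi, yi in zip(X, y):
                    grad = lam * w
                    if yi * (xi @ w) < 1:
                        grad = grad - yi * xi
                    w -= lr * grad
            return w

        # Toy two-class data with labels in {-1, +1}, scaled so ||x|| <= 1.
        X = rng.normal(loc=1.0, size=(200, 2))
        X[100:] -= 2.0
        y = np.where(np.arange(200) < 100, 1.0, -1.0)
        X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)

        lam, eps = 0.1, 1.0
        w = train_linear_svm(X, y, lam=lam)

        # Output perturbation: for L2-regularized ERM with ||x|| <= 1, the
        # sensitivity of the minimizer is bounded by 2 / (n * lam).
        sensitivity = 2.0 / (len(X) * lam)
        w_priv = w + rng.laplace(scale=sensitivity / eps, size=w.shape)

        print("accuracy:", np.mean(np.sign(X @ w_priv) == y))

    Smaller ε means larger noise and lower utility, which is exactly the utility-privacy trade-off the paper's weighted aggregation aims to improve.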

  • Open Access

    ARTICLE

    Research on Data Tampering Prevention Method for ATC Network Based on Zero Trust

    Xiaoyan Zhu, Ruchun Jia, Tingrui Zhang, Song Yao
    CMC-Computers, Materials & Continua, Vol.78, No.3, pp. 4363-4377, 2024, DOI:10.32604/cmc.2023.045615
    (This article belongs to the Special Issue: Security, Privacy, and Robustness for Trustworthy AI Systems)
    Abstract Traditional air traffic control (ATC) information sharing provides weak protection for personal privacy data, so the shared data are easily tampered with. Starting from the application of the ATC network, this paper focuses on zero trust, zero-trust access strategies, and tamper-proof methods for information-sharing network data. By improving the ATC network's zero-trust physical-layer authentication and the distributed feature-differentiation calculation of network data, this paper reconstructs the personal privacy scope authentication structure and designs a tamper-proof method for ATC information sharing on the…
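
    As background on tamper-evidence, a minimal sketch: each shared record carries a keyed MAC that any holder of the key can re-verify, so silent modification is detectable. The HMAC-SHA256 construction, the field names, and the static demo key below are illustrative assumptions, not the zero-trust method proposed in the paper:

        import hmac
        import hashlib
        import json

        SECRET_KEY = b"demo-key-rotate-in-practice"   # hypothetical shared key

        def seal(record: dict) -> dict:
            # Attach an HMAC-SHA256 tag over a canonical record encoding.
            payload = json.dumps(record, sort_keys=True).encode()
            tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
            return {"record": record, "tag": tag}

        def verify(sealed: dict) -> bool:
            # Recompute the tag and compare in constant time.
            payload = json.dumps(sealed["record"], sort_keys=True).encode()
            expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, sealed["tag"])

        msg = seal({"flight": "MU583", "level": 320, "squawk": "4601"})
        assert verify(msg)                      # unmodified record passes

        msg["record"]["level"] = 280            # tampering is detected
        assert not verify(msg)
        print("tamper check OK")

    A zero-trust deployment would replace the shared static key with per-session keys issued only after continuous authentication of each endpoint.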
