
Privacy-Preserving Technologies for Large-scale Artificial Intelligence

Submission Deadline: 30 April 2024

Guest Editors

Prof. Jing Qiu, Guangzhou University, China
Prof. Xiong Li, University of Electronic Science and Technology of China
Assoc. Prof. Zhe Sun, Guangzhou University, China


The rapid growth of large-scale artificial intelligence (AI) technologies, particularly fueled by the advent of advanced deep learning models such as GPT-4, has ushered in a new era of AI applications with remarkable capabilities. These models, trained on massive amounts of data, have demonstrated exceptional performance in tasks such as natural language processing, image recognition, and recommendation. However, their success has also brought critical privacy and security concerns to the forefront.


As large-scale AI models continue to evolve and play an increasingly prominent role in our lives, it becomes imperative to address the privacy and security challenges they pose. The training process of these models often relies on vast amounts of personal and sensitive data, raising concerns about data breaches, unauthorized access, and potential misuse. Furthermore, deploying and using these models in real-world applications can inadvertently expose private user information, jeopardizing individual privacy rights.

While significant strides have been made in privacy-preserving techniques, notable limitations remain. A primary challenge lies in striking a balance between protecting user privacy and leveraging large-scale data to train high-performance AI models. Ensuring data privacy without sacrificing the utility and performance of AI systems remains an open problem.

We seek original research articles, reviews, and survey papers that address the latest developments, challenges, and solutions in this rapidly evolving field. Topics of interest include, but are not limited to:

· Privacy computing theories for large-scale AI models

· Privacy-preserving methods in fine-tuning and pre-training

· Differential privacy techniques for large-scale AI models

· Secure multi-party computation and federated learning

· Homomorphic encryption and secure inference

· Privacy-preserving data aggregation and anonymization

· Trustworthy and transparent AI systems

· Privacy-preserving transfer learning

· Privacy and fairness in large-scale AI applications

· Adversarial machine learning and privacy attacks

· Privacy-enhancing technologies (PETs) for AI in various domains (healthcare, finance, IoT, etc.)

· Regulation and policy considerations for privacy in large-scale AI
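As a concrete illustration of one listed topic, differential privacy, the sketch below adds calibrated Laplace noise to a clipped sum to release a private mean. It is a minimal textbook example, not a prescribed method for submissions; the function names, clipping bounds, and privacy parameter are illustrative assumptions.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a zero-mean Laplace distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, lower, upper, epsilon):
    """Epsilon-DP estimate of the mean of bounded values (Laplace mechanism).

    Clipping each value to [lower, upper] bounds the sensitivity of the sum
    by (upper - lower), so Laplace noise with scale (upper - lower) / epsilon
    added to the sum satisfies epsilon-differential privacy.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    noisy_sum = sum(clipped) + laplace_noise((upper - lower) / epsilon)
    return noisy_sum / len(values)
```

Smaller epsilon means stronger privacy but noisier estimates; tuning that trade-off for large models is exactly the utility/privacy tension the paragraphs above describe.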

We encourage submissions that propose novel methodologies, frameworks, algorithms, and case studies that address the challenges of privacy and security in large-scale AI. Papers exploring techniques for privacy preservation in AI while maintaining utility and performance are of particular interest.


Keywords: Privacy-preserving Technologies, Large-scale Artificial Intelligence, Differential Privacy, Trustworthy AI, Privacy-preserving Transfer Learning

Published Papers

  • Open Access


    Privacy-Preserving Federated Deep Learning Diagnostic Method for Multi-Stage Diseases

    Jinbo Yang, Hai Huang, Lailai Yin, Jiaxing Qu, Wanjuan Xie
    CMES-Computer Modeling in Engineering & Sciences, DOI:10.32604/cmes.2023.045417
    (This article belongs to this Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)
    Abstract: Diagnosing multi-stage diseases typically requires doctors to consider multiple data sources, including clinical symptoms, physical signs, biochemical test results, imaging findings, pathological examination data, and even genetic data. When applying machine learning modeling to predict and diagnose multi-stage diseases, several challenges need to be addressed. Firstly, the model needs to handle multimodal data, as the data used by doctors for diagnosis includes image data, natural language data, and structured data. Secondly, the privacy of patients’ data needs to be protected, as these data contain the most sensitive and private information. Lastly, considering the practicality of the model, the computational requirements should…

  • Open Access


    KSKV: Key-Strategy for Key-Value Data Collection with Local Differential Privacy

    Dan Zhao, Yang You, Chuanwen Luo, Ting Chen, Yang Liu
    CMES-Computer Modeling in Engineering & Sciences, DOI:10.32604/cmes.2023.045400
    (This article belongs to this Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)
    Abstract: In recent years, the research field of data collection under local differential privacy (LDP) has expanded its focus from elementary data types to more complex structural data, such as set-value and graph data. However, our comprehensive review of the existing literature reveals that more studies are needed that engage with key-value data collection. Such studies would simultaneously collect the frequencies of keys and the mean of the values associated with each key. Additionally, the allocation of the privacy budget between the frequencies of keys and the means of values for each key does not yield an optimal utility tradeoff…

  • Open Access


    A Cloud-Fog Enabled and Privacy-Preserving IoT Data Market Platform Based on Blockchain

    Yurong Luo, Wei You, Chao Shang, Xiongpeng Ren, Jin Cao, Hui Li
    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.2, pp. 2237-2260, 2024, DOI:10.32604/cmes.2023.045679
    (This article belongs to this Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)
    Abstract: The dynamic landscape of the Internet of Things (IoT) is set to revolutionize the pace of interaction among entities, ushering in a proliferation of applications characterized by heightened quality and diversity. Among the pivotal applications within the realm of IoT, the Smart Grid (SG) is a significant example, evolving into intricate networks of energy deployment marked by data integration. This evolution concurrently entails data interchange with other IoT entities. However, there are also several challenges, including data-sharing overheads and the intricate establishment of trusted centers in the IoT ecosystem. In this paper, we introduce a hierarchical secure data-sharing platform empowered…
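The KSKV paper listed above collects key frequencies and per-key value means under local differential privacy. As a generic, simplified sketch of the frequency half only, the snippet below uses plain randomized response; it is not the paper's KSKV mechanism, and all names and the even budget split are assumptions for illustration.

```python
import math
import random

def rr_perturb(bit: int, epsilon: float) -> int:
    """Randomized response: keep the true bit with probability e^eps / (e^eps + 1)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

def rr_estimate_frequency(reports, epsilon: float) -> float:
    """Debias the mean of perturbed bits into an unbiased frequency estimate."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)

# A key-value collector must divide its total budget between the key-frequency
# query and the per-key value query; an even split is the naive baseline, and
# the abstract's point is that this allocation strongly affects utility.
EPS_TOTAL = 2.0
EPS_FREQ = EPS_VAL = EPS_TOTAL / 2.0
```

Each user perturbs locally before reporting, so the collector never sees raw data; the debiasing step recovers an accurate aggregate frequency from the noisy reports.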
