Open Access

ARTICLE

Privacy-Preserving Transformer Inference with Optimized Homomorphic Encryption and Secure Collaborative Computing

Tao Bai1, Yang Tang2, Kuan Shao3, Zhenyong Zhang3,*, Yuanteng Liu4

1 Guizhou Provincial Meteorological Data Center, Guiyang, China
2 Technical Department of the People’s Procuratorate of Guizhou Province, Guiyang, China
3 College of Computer Science and Technology, Guizhou University, Guiyang, China
4 Colorful Guizhou Digital Technology Co., Ltd., Guiyang, China

* Corresponding Author: Zhenyong Zhang. Email: email

Computers, Materials & Continua 2026, 88(1), 52 https://doi.org/10.32604/cmc.2026.078473

Abstract

In recent years, the rapid development of artificial intelligence has greatly promoted the adoption of Machine Learning as a Service (MLaaS). Users upload their inputs through front-end applications, and the server returns model inference results. However, MLaaS may lead to serious privacy breaches. Large language model services are a typical form of MLaaS, and the Transformer is the standard architecture underlying large language models. Therefore, this paper proposes a privacy-preserving Transformer inference scheme based on the CKKS fully homomorphic encryption scheme, optimized for both computational and communication efficiency. First, this paper implements efficient matrix multiplication based on ring multiplication and tunes the matrix partition parameters to accommodate different operand types (ciphertext-plaintext and ciphertext-ciphertext) and different matrix dimensions. Second, it designs optimized secure Softmax, LayerNorm, and GELU protocols based on parameter fuzzing and collaborative computing to perform these atomic computations efficiently and securely over ciphertexts. Finally, text-classification experiments were conducted on the IMDB and AGNEWS datasets. The results show that, under our experimental settings (an AMD Ryzen 7 5700G CPU with 32 GB of RAM and 8-thread parallel computing using the Lattigo library), the proposed scheme completes the inference process within 3 s with communication costs below 1 GB, and its accuracy is comparable to that of plaintext computation.
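For reference, the three Transformer atomic operations that the paper's secure protocols must emulate over ciphertexts are sketched below in plaintext. This is a hedged illustration only: it shows the functions being approximated, not the paper's actual CKKS-based protocols, parameter fuzzing, or collaborative-computing steps; all function names here are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    z = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=axis, keepdims=True)

def layer_norm(x, gamma, beta, eps=1e-5):
    # Normalize each row to zero mean and unit variance, then scale and shift.
    mu = np.mean(x, axis=-1, keepdims=True)
    var = np.var(x, axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def gelu(x):
    # tanh approximation of GELU, common in Transformer implementations.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))
```

Each of these involves nonlinearities (exp, division, square root, tanh) that homomorphic encryption does not support natively, which is why dedicated secure protocols are needed for them.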

Keywords

Machine learning as a service; privacy preservation; Transformer; collaborative computing

Cite This Article

APA Style
Bai, T., Tang, Y., Shao, K., Zhang, Z., & Liu, Y. (2026). Privacy-preserving Transformer inference with optimized homomorphic encryption and secure collaborative computing. Computers, Materials & Continua, 88(1), 52. https://doi.org/10.32604/cmc.2026.078473
Vancouver Style
Bai T, Tang Y, Shao K, Zhang Z, Liu Y. Privacy-Preserving Transformer Inference with Optimized Homomorphic Encryption and Secure Collaborative Computing. Comput Mater Contin. 2026;88(1):52. https://doi.org/10.32604/cmc.2026.078473
IEEE Style
T. Bai, Y. Tang, K. Shao, Z. Zhang, and Y. Liu, "Privacy-Preserving Transformer Inference with Optimized Homomorphic Encryption and Secure Collaborative Computing," Comput. Mater. Contin., vol. 88, no. 1, Art. no. 52, 2026. https://doi.org/10.32604/cmc.2026.078473



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.