Open Access

ARTICLE

PrivLLM-Guard: A Differentially-Private Large Language Model for Real-Time Confidential Medical Text Generation and Summarization

Ans D. Alghamdi*

Department of Computer Science, Faculty of Computing and Information, Al-Baha University, Al-Baha, Saudi Arabia

* Corresponding Author: Ans D. Alghamdi. Email: email

(This article belongs to the Special Issue: Advances in Large Models and Domain-specific Applications)

Computers, Materials & Continua 2026, 87(3), 68 https://doi.org/10.32604/cmc.2026.075985

Abstract

How can AI assist doctors in generating clinical reports without compromising patient privacy? This question motivates our development of PrivLLM-Guard, a novel framework for differentially private large language models (LLMs) tailored to real-time confidential medical text generation and summarization. While LLMs have shown promise in automating clinical documentation, the sensitivity of healthcare data demands rigorous privacy protections. PrivLLM-Guard addresses this need by combining advanced differential privacy techniques with adaptive noise calibration, ensuring robust privacy guarantees without sacrificing utility. The framework integrates bidirectional transformer encoders with autoregressive decoders, further enhanced by privacy-aware attention and gradient perturbation mechanisms. Extensive experiments on three large-scale medical datasets demonstrate BLEU-4 scores of 89.7% for generation and ROUGE-L scores of 92.3% for summarization, while maintaining strict privacy budgets. The model processes 512-token sequences in real time with an average latency of 245 ms and memory usage of just 4.2 GB. Compared to state-of-the-art privacy-preserving LLMs, PrivLLM-Guard improves the utility-privacy trade-off by 15.8% and reduces computational overhead by 23.4%. Key contributions include adaptive noise injection, dynamic privacy budgeting, and an integrated privacy auditing module, collectively advancing secure and trustworthy AI deployment in clinical environments.
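For readers unfamiliar with gradient perturbation, the core step of differentially private training (as in DP-SGD) clips each per-example gradient to a fixed norm and adds calibrated Gaussian noise before the update. The sketch below is a minimal illustration of that generic step only; the function name, parameters, and NumPy implementation are our own assumptions and do not reproduce PrivLLM-Guard's adaptive noise calibration.

```python
import numpy as np

def dp_perturb_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Illustrative DP-SGD-style gradient perturbation (not the paper's method).

    1. Clip the per-example gradient so its L2 norm is at most clip_norm,
       bounding each example's influence (the sensitivity).
    2. Add Gaussian noise scaled by noise_multiplier * clip_norm, which
       yields an (epsilon, delta)-DP guarantee under standard accounting.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    # Scale down only if the gradient exceeds the clipping threshold.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# Example: a gradient of norm 5 is scaled down to norm 1 before noising.
g = np.array([3.0, 4.0])
noisy = dp_perturb_gradient(g)
```

In a full training loop this step runs per example (or per microbatch), and a privacy accountant tracks the cumulative budget spent across iterations.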

Keywords

Differential privacy; large language models; medical text generation; privacy-preserving computing; healthcare AI; text summarization; real-time processing

Cite This Article

APA Style
Alghamdi, A.D. (2026). PrivLLM-Guard: A Differentially-Private Large Language Model for Real-Time Confidential Medical Text Generation and Summarization. Computers, Materials & Continua, 87(3), 68. https://doi.org/10.32604/cmc.2026.075985
Vancouver Style
Alghamdi AD. PrivLLM-Guard: A Differentially-Private Large Language Model for Real-Time Confidential Medical Text Generation and Summarization. Comput Mater Contin. 2026;87(3):68. https://doi.org/10.32604/cmc.2026.075985
IEEE Style
A. D. Alghamdi, “PrivLLM-Guard: A Differentially-Private Large Language Model for Real-Time Confidential Medical Text Generation and Summarization,” Comput. Mater. Contin., vol. 87, no. 3, pp. 68, 2026. https://doi.org/10.32604/cmc.2026.075985



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.