Special Issues
Table of Content

Large Language Models in Password Authentication Security: Challenges, Solutions and Future Directions

Submission Deadline: 30 April 2026 View: 447 Submit to Special Issue

Guest Editors

Dr. Weizheng Wang

Email: weizheng.wang@ieee.org

Affiliation: Department of Electronic Engineering, The Hong Kong Polytechnic University, Hong Kong SAR

Homepage:

Research Interests: information security and privacy, password-based authentication, large language model security


Prof. Zhaoyang Han

Email: zyhan@njfu.edu.cn

Affiliation: The College of Computer Science and Technology, Nanjing Forestry University, Nanjing, 210037, China

Homepage:

Research Interests: information security and privacy, password-based authentication, large language model security


Prof. Chunhua Su

Email: chsu@u-aizu.ac.jp

Affiliation: School of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu, Fukushima, 965-8580, Japan

Homepage:

Research Interests: big data privacy protection, IoT security and privacy, cryptanalysis, cryptographic protocols, privacy-preserving technologies in data mining, RFID authentication, wireless mesh networks security


Summary

The rapid deployment of Large Language Models (LLMs) in authentication systems has introduced unprecedented security challenges while simultaneously offering innovative solutions for password-based security. As organizations increasingly integrate LLMs into identity verification processes, critical vulnerabilities including prompt injection attacks, model poisoning, and privacy leakage threaten the fundamental security assumptions of authentication systems, making robust security frameworks for LLM-enabled password authentication an urgent research priority.


This special issue aims to explore cutting-edge research addressing the intersection of Large Language Models and password authentication security. We seek comprehensive investigations into both offensive and defensive aspects of LLM-based authentication systems, covering theoretical foundations, practical implementations, and empirical evaluations. The issue welcomes contributions from computer science, cybersecurity, cryptography, and human-computer interaction communities, emphasizing research that advances the security posture of LLM-enabled authentication systems. Topics range from novel security architectures to case studies of real-world deployments, with particular emphasis on solutions that balance usability with robust security guarantees.


To address these multifaceted challenges, this special issue focuses on the following key research areas, including but not limited to:
· LLM-Enhanced Password Security Mechanisms
· Prompt Injection Attacks on Authentication Systems
· Privacy-Preserving LLM Authentication Frameworks
· Adversarial Machine Learning in Password Verification
· Multi-Factor Authentication with LLM Integration
· Federated Learning for Secure Password Analysis
· Bias and Fairness in LLM-Based Authentication


Keywords

Large Language Models, Password Authentication, AI Security, Authentication Systems

Share Link