Open Access
ARTICLE
TGI-FPR: An Improved Multi-Label Password Guessing Model
1 School of Cyberspace Security (School of Cryptology), Hainan University, Haikou, 570228, China
2 Laboratory for Advanced Computing and Intelligence Engineering, Wuxi, 214100, China
3 Jiangsu Variable Supercomputer Technology Co., Ltd., Wuxi, 214100, China
* Corresponding Author: Shuai Liu. Email:
Computers, Materials & Continua 2025, 84(1), 463-490. https://doi.org/10.32604/cmc.2025.063862
Received 26 January 2025; Accepted 29 April 2025; Issue published 09 June 2025
Abstract
TarGuess-I is a leading model that uses Personally Identifiable Information (PII) for online targeted password guessing, and its strong guessing performance has drawn considerable attention in password security research. However, by analyzing the vulnerable behavior of users who construct passwords by combining popular passwords with their PII, we found that the model considers neither popular passwords nor frequent substrings, and that it relies on overly broad personal information categories that produce extensive duplicate statistics. To address these issues, we propose an improved password guessing model, TGI-FPR, which incorporates three semantic methods: (1) identifying popular passwords by generating top-300 lists from similar websites, (2) using frequent substrings as new grammatical labels to capture finer-grained password structures, and (3) further subdividing the six major categories of personal information. To evaluate the proposed model, we conducted experiments on six large-scale real-world password leak datasets and compared its accuracy within the first 100 guesses to that of TarGuess-I. The results show a 2.65% improvement in guessing accuracy.
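The first semantic method above can be sketched as follows: build a top-300 popular-password list from a leaked corpus of a similar website, then tag each password either with a dedicated popular-password label or with a conventional PCFG-style character-class structure. This is a minimal illustration, not the paper's implementation; the function names and the label scheme (a single "P" tag for popular passwords, and "L"/"D"/"S" run tags for letters, digits, and symbols) are assumptions for exposition.

```python
import re
from collections import Counter

def top_popular(passwords, k=300):
    # Count password frequencies in a leaked corpus and keep
    # the k most common entries as the popular-password list.
    return {pw for pw, _ in Counter(passwords).most_common(k)}

def base_structure(password):
    # PCFG-style base structure: split the password into maximal runs
    # of letters, digits, and symbols, tagging each run with its length.
    runs = re.findall(r"[A-Za-z]+|\d+|[^A-Za-z0-9]+", password)
    tags = []
    for run in runs:
        if run[0].isalpha():
            tags.append(f"L{len(run)}")
        elif run[0].isdigit():
            tags.append(f"D{len(run)}")
        else:
            tags.append(f"S{len(run)}")
    return tags

def tag_password(password, popular):
    # If the whole password appears in the popular list, emit a single
    # "P" label; otherwise fall back to the character-class structure.
    return ["P"] if password in popular else base_structure(password)
```

For example, given a small leak where "123456" and "password" dominate, `tag_password("123456", popular)` yields `["P"]`, while a PII-flavored password like "alice1990" falls through to the structure `["L5", "D4"]`. The paper's actual grammar additionally labels frequent substrings and subdivided PII categories, which this sketch omits.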

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.