TY - EJOUR
AU - Khan, Talha Farooq
AU - Hussain, Majid
AU - Arslan, Muhammad
AU - Saeed, Muhammad
AU - Khan, Lal
AU - Chang, Hsien-Tsung
TI - LLM-Based Enhanced Clustering for Low-Resource Language: An Empirical Study
T2 - Computer Modeling in Engineering & Sciences
PY - 2025
VL - 145
IS - 3
SN - 1526-1506
AB - Text clustering is an important task because of its vital role in NLP-related applications. However, existing clustering research is based mainly on English, with limited work on low-resource languages such as Urdu. Text clustering for low-resource languages faces significant obstacles, including limited annotated collections and strong linguistic diversity. The contribution of this paper is twofold: (1) we introduce UNC-2025, a clustering dataset comprising 100k Urdu news documents, and (2) we provide a detailed empirical benchmark of Large Language Model (LLM)-enhanced clustering methods for Urdu text. We evaluate the behavior of 11 multilingual and Urdu-specific embedding models across 3 different clustering algorithms, carefully measuring performance with a set of internal and external validity metrics. The best configuration, mBERT embeddings with the HDBSCAN algorithm, attains new state-of-the-art performance with an external validity score of 0.95, establishing a strong new baseline for Urdu text clustering. Importantly, the results confirm the robustness and scalability of LLM-generated embeddings in capturing the fine, subtle semantics needed for topic discovery in low-resource settings, opening the door to novel NLP applications in underrepresented languages.
KW - Large language models (LLMs)
KW - clustering
KW - low-resource language
KW - natural language processing
DO - 10.32604/cmes.2025.073021
ER -