
Open Access

ARTICLE

PROMPTx-PE: Adaptive Optimization of Prompt Engineering Strategies for Accuracy and Robustness in Large Language Models

Talha Farooq Khan1, Fahad Ali2, Majid Hussain1, Lal Khan3,*, Hsien-Tsung Chang4,5,6,*
1 Department of Computer Science, The University of Faisalabad, Faisalabad, 38000, Pakistan
2 Department of Computer Science, The University of Southern Punjab, Multan, 60640, Pakistan
3 Department of AI and SW, Gachon University, Seongnam, 13120, Republic of Korea
4 Department of Artificial Intelligence, Chang Gung University, Linkou, Taoyuan, 333, Taiwan
5 Department of Computer Science and Information Engineering, Chang Gung University, Linkou, Taoyuan, 333, Taiwan
6 Center for Artificial Intelligence in Medicine, Chang Gung Memorial Hospital at Linkou, Taoyuan, 333, Taiwan
* Corresponding Author: Lal Khan. Email: email; Hsien-Tsung Chang. Email: email

Computers, Materials & Continua https://doi.org/10.32604/cmc.2025.074557

Received 13 October 2025; Accepted 09 December 2025; Published online 14 January 2026

Abstract

The rapid growth of large language model (LLM) applications underscores the need for adaptive and efficient prompt engineering strategies. Existing methods are often not adaptable, robust, or efficient across different domains. This study introduces a prompt optimization framework, named PROMPTx-PE, designed to deliver higher accuracy and robustness on LLM-based tasks. The proposed system features a prompt selection scheme informed by reinforcement learning, a contextual layer, and a dynamic weighting module governed by Lyapunov-based stability criteria. PROMPTx-PE dynamically balances exploration and exploitation of the prompt space according to real-time feedback and a multi-objective reward. Extensive evaluation on benchmark datasets (GLUE, SuperGLUE) and domain-specific datasets (Healthcare-QA and Industrial-NER) demonstrates a peak performance of 89.4% and strong robustness at under 3% additional computational cost. The results confirm the effectiveness, consistency, and scalability of PROMPTx-PE as a framework for adaptive prompt engineering in current LLM applications.
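For orientation, the following minimal Python sketch illustrates the kind of reinforcement-learning-driven prompt selection loop with a multi-objective (accuracy/robustness) reward that the abstract describes. It is not the authors' implementation: the template strings, weights, and evaluate() callback are illustrative assumptions, and the contextual layer and Lyapunov-based weight regulation are omitted.

```python
import random

class AdaptivePromptSelector:
    """Epsilon-greedy prompt selection with a weighted multi-objective reward (sketch)."""

    def __init__(self, prompt_templates, alpha=0.1, epsilon=0.2,
                 w_accuracy=0.7, w_robustness=0.3):
        self.templates = prompt_templates
        self.values = {t: 0.0 for t in prompt_templates}  # running reward estimates
        self.alpha = alpha        # learning rate for value updates
        self.epsilon = epsilon    # exploration rate
        self.w_acc = w_accuracy   # weights combining the two objectives
        self.w_rob = w_robustness

    def select(self):
        # Explore a random template with probability epsilon, otherwise exploit the best one.
        if random.random() < self.epsilon:
            return random.choice(self.templates)
        return max(self.templates, key=lambda t: self.values[t])

    def update(self, template, accuracy, robustness):
        # Scalarize the multi-objective feedback and move the estimate toward it.
        reward = self.w_acc * accuracy + self.w_rob * robustness
        self.values[template] += self.alpha * (reward - self.values[template])

# Usage sketch: evaluate(template, query) is a hypothetical callback that queries the
# LLM with the filled-in template and returns (accuracy, robustness) scores.
# selector = AdaptivePromptSelector(["Answer concisely: {q}", "Think step by step: {q}"])
# for query in queries:
#     t = selector.select()
#     acc, rob = evaluate(t, query)
#     selector.update(t, acc, rob)
```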

Keywords

Prompt engineering; large language models; adaptive optimization; robustness; multi-objective optimization; reinforcement learning; natural language processing