TY  - EJOU
AU  - Pan, Ronghao
AU  - Bernal-Beltrán, Tomás
AU  - Rodríguez-González, Alejandro
AU  - Menasalvas-Ruíz, Ernestina
AU  - Valencia-García, Rafael
TI  - Evaluating Spanish Medical Entity Recognition: Large Language Models with Prompting versus Fine-Tuning
T2  - Computers, Materials & Continua
PY  - 2026
VL  - 87
IS  - 3
SN  - 1546-2226
AB  - The digitization of healthcare has resulted in the production of large amounts of structured and unstructured clinical data, creating the need for accurate and efficient named entity recognition (NER) to support medical procedures. This study evaluates and compares three approaches to NER in the medical domain in Spanish: using Large Language Models (LLMs) with In-Context Learning techniques (Zero-Shot, Few-Shot, and Chain-of-Thought); fine-tuning of LLMs; and fine-tuning of encoder-only models. Experiments were conducted on the Meddocan, Meddoprof, Meddoplace, and Symptemist benchmark datasets. Fine-tuned encoder-only models achieve the best performance across all datasets, reaching macro-F1 scores of up to 76.71 on Meddocan, 71.51 on Meddoplace, 66.07 on Meddoprof, and 63.50 on Symptemist. While LLMs with prompting offer flexibility and require no task-specific training, their performance varies significantly depending on the entity type. In addition, we evaluated fine-tuning of LLMs using QLoRA, but the improvements were limited due to the small amount of training data available per entity type, which made model adaptation less effective.
KW  - Named entity recognition
KW  - medical entity detection
KW  - large language models
KW  - transformers
KW  - prompt-tuning
KW  - fine-tuning
KW  - in-context learning
KW  - natural language processing
DO  - 10.32604/cmc.2026.077501
ER  - 