TY - EJOUR
AU - Thobhani, Alaa
AU - Zou, Beiji
AU - Kui, Xiaoyan
AU - Abdussalam, Amr
AU - Asim, Muhammad
AU - ELAffendi, Mohammed
AU - Shah, Sajid
TI - A Novelty Framework in Image-Captioning with Visual Attention-Based Refined Visual Features
T2 - Computers, Materials & Continua
PY - 2025
VL - 82
IS - 3
SN - 1546-2226
AB - Image captioning, the task of generating descriptive sentences for images, has advanced significantly with the integration of semantic information. However, traditional models still rely on static visual features that do not evolve with the changing linguistic context, which can hinder the ability to form meaningful connections between the image and the generated captions. This limitation often leads to captions that are less accurate or descriptive. In this paper, we propose a novel approach to enhance image captioning by introducing dynamic interactions in which visual features continuously adapt to the evolving linguistic context. Our model strengthens the alignment between visual and linguistic elements, resulting in more coherent and contextually appropriate captions. Specifically, we introduce two innovative modules: the Visual Weighting Module (VWM) and the Enhanced Features Attention Module (EFAM). The VWM adjusts visual features using partial attention, enabling dynamic reweighting of the visual inputs, while the EFAM further refines these features to improve their relevance to the generated caption. By continuously adjusting visual features in response to the linguistic context, our model bridges the gap between static visual features and dynamic language generation. We demonstrate the effectiveness of our approach through experiments on the MS-COCO dataset, where our method outperforms state-of-the-art techniques in terms of caption quality and contextual relevance. Our results show that dynamic visual-linguistic alignment significantly enhances image captioning performance.
KW - Image-captioning
KW - visual attention
KW - deep learning
KW - visual features
DO - 10.32604/cmc.2025.060788
ER -
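
Note: the abstract describes reweighting static region features with attention conditioned on the decoder's linguistic state. The sketch below is a minimal, hypothetical illustration of that general idea, not the authors' VWM/EFAM implementation; the class name, dimensions, and scoring function are all assumptions for illustration only.

# Hypothetical sketch of context-conditioned visual reweighting: an attention
# module rescales static region features using the decoder's current hidden
# state, so the visual input adapts to the evolving linguistic context.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualWeighting(nn.Module):
    """Illustrative module: reweights region features via attention
    conditioned on the language decoder's hidden state."""
    def __init__(self, feat_dim: int, hidden_dim: int, attn_dim: int = 512):
        super().__init__()
        self.proj_v = nn.Linear(feat_dim, attn_dim)    # project visual features
        self.proj_h = nn.Linear(hidden_dim, attn_dim)  # project language state
        self.score = nn.Linear(attn_dim, 1)            # scalar attention score

    def forward(self, regions: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        # regions: (B, R, feat_dim) static region features
        # hidden:  (B, hidden_dim) decoder state at the current time step
        e = torch.tanh(self.proj_v(regions) + self.proj_h(hidden).unsqueeze(1))
        alpha = F.softmax(self.score(e), dim=1)        # (B, R, 1) region weights
        return regions * alpha                         # context-adapted features

# Usage: at each decoding step, recompute the weights so the visual
# representation tracks the caption generated so far.
vwm = VisualWeighting(feat_dim=2048, hidden_dim=512)
regions = torch.randn(4, 36, 2048)   # e.g., 36 detected regions per image
hidden = torch.randn(4, 512)         # decoder hidden state
refined = vwm(regions, hidden)       # (4, 36, 2048) reweighted features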