Open Access

ARTICLE


A Concise and Varied Visual Features-Based Image Captioning Model with Visual Selection

Alaa Thobhani1,*, Beiji Zou1, Xiaoyan Kui1, Amr Abdussalam2, Muhammad Asim3, Naveed Ahmed4, Mohammed Ali Alshara4,5

1 School of Computer Science and Engineering, Central South University, Changsha, 410083, China
2 Electronic Engineering and Information Science Department, University of Science and Technology of China, Hefei, 230026, China
3 EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh, 11586, Saudi Arabia
4 College of Computer and Information Sciences, Prince Sultan University, Riyadh, 11586, Saudi Arabia
5 College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh, 11432, Saudi Arabia

* Corresponding Author: Alaa Thobhani.

Computers, Materials & Continua 2024, 81(2), 2873-2894. https://doi.org/10.32604/cmc.2024.054841

Abstract

Image captioning has attracted increasing attention in recent years. The visual characteristics of the input image play a crucial role in generating high-quality captions. Prior studies have used visual attention mechanisms to dynamically focus on localized regions of the input image, improving the identification of relevant image regions at each step of caption generation. However, equipping image captioning models with the ability to select the most relevant visual features from the input image, and to attend to them, can significantly improve how those features are utilized and, in turn, enhance captioning performance. In light of this, we present an image captioning framework that efficiently exploits the extracted image representations. Our framework comprises three key components: the Visual Feature Detector (VFD) module, the Visual Feature Visual Attention (VFVA) module, and the language model. The VFD module detects a subset of the most pertinent local visual features, producing an updated visual-features matrix. The VFVA then attends over this matrix, yielding an updated context vector that the language model uses to generate an informative description. Together, the VFD and VFVA modules add a further layer of processing over the visual features, contributing to improved captioning performance. Experiments on the MS-COCO dataset show that the proposed framework competes well with state-of-the-art methods, effectively leveraging visual representations to improve performance. The implementation code can be found here: (accessed on 30 July 2024).
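The abstract outlines a three-stage pipeline: the VFD selects a subset of the local visual features, the VFVA attends over that subset conditioned on the decoder state, and the language model consumes the resulting context vector. As a rough illustration of how such a pipeline could be wired together, below is a minimal PyTorch sketch. Only the module names VFD and VFVA come from the abstract; the internals (scoring features with a learned linear head, hard top-k selection, additive attention) and all dimension parameters (feat_dim, hid_dim, att_dim, k) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VFD(nn.Module):
    """Visual Feature Detector (sketch): scores the N local features and keeps
    the top-k, producing the updated visual-features matrix the abstract
    describes. Linear scoring and hard top-k selection are assumptions."""
    def __init__(self, feat_dim: int, k: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # learned relevance score per feature
        self.k = k

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, N, feat_dim) local visual features, e.g., grid/region features
        scores = self.score(feats).squeeze(-1)            # (batch, N)
        topk = scores.topk(self.k, dim=1).indices          # indices of the k most relevant features
        idx = topk.unsqueeze(-1).expand(-1, -1, feats.size(-1))
        return feats.gather(1, idx)                        # (batch, k, feat_dim)

class VFVA(nn.Module):
    """Visual Feature Visual Attention (sketch): additive attention over the
    VFD output, conditioned on the decoder's hidden state, returning the
    context vector passed to the language model."""
    def __init__(self, feat_dim: int, hid_dim: int, att_dim: int):
        super().__init__()
        self.w_f = nn.Linear(feat_dim, att_dim)
        self.w_h = nn.Linear(hid_dim, att_dim)
        self.v = nn.Linear(att_dim, 1)

    def forward(self, feats: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # feats: (batch, k, feat_dim) selected features; h: (batch, hid_dim) decoder state
        e = self.v(torch.tanh(self.w_f(feats) + self.w_h(h).unsqueeze(1)))  # (batch, k, 1)
        alpha = F.softmax(e, dim=1)                        # attention weights over selected features
        return (alpha * feats).sum(dim=1)                  # context vector: (batch, feat_dim)
```

At each decoding step, the language model (e.g., an LSTM decoder) would call VFVA with its current hidden state and combine the returned context vector with the previous word embedding to predict the next word; the VFD selection itself can be computed once per image.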

Cite This Article

APA Style
Thobhani, A., Zou, B., Kui, X., Abdussalam, A., Asim, M. et al. (2024). A concise and varied visual features-based image captioning model with visual selection. Computers, Materials & Continua, 81(2), 2873-2894. https://doi.org/10.32604/cmc.2024.054841
Vancouver Style
Thobhani A, Zou B, Kui X, Abdussalam A, Asim M, Ahmed N, et al. A concise and varied visual features-based image captioning model with visual selection. Comput Mater Contin. 2024;81(2):2873-2894. https://doi.org/10.32604/cmc.2024.054841
IEEE Style
A. Thobhani et al., “A Concise and Varied Visual Features-Based Image Captioning Model with Visual Selection,” Comput. Mater. Contin., vol. 81, no. 2, pp. 2873-2894, 2024. https://doi.org/10.32604/cmc.2024.054841



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.