Image-to-Text Description Approach based on Deep Learning Models


Muhanad Hameed Arif

Abstract

Image-to-text description refers to generating captions for images that agree with human language perception. With the rapid progress of deep learning models, image-to-text description (or image captioning) has received growing attention from researchers across diverse artificial-intelligence applications. In general, accurately capturing the semantic information of the principal objects in an image and describing the associations among them remains a crucial issue in this field. In this paper, an image-to-text description approach based on Inception-ResNetV2-LSTM with an attention technique is proposed for producing effective textual descriptions of images.


In the proposed approach, Inception-ResNetV2 is exploited to extract essential features, and LSTM integrated with the attention technique serves as the sentence-generation model, so that learning can concentrate on specific portions of the images, thereby enhancing the performance of the image-to-text description approach. In terms of the Meteor and BLEU (1-4) metrics, the proposed approach outperformed other state-of-the-art approaches, scoring 0.787 and (0.977, 0.964, 0.886, and 0.759), respectively.
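The core mechanism described above, attention over CNN image features during sentence generation, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the projection matrices, dimensions, and random inputs are all hypothetical, with the feature shape chosen to match Inception-ResNetV2's final 8x8x1536 convolutional map (64 regions of 1536 features for a 299x299 input).

```python
import numpy as np

rng = np.random.default_rng(0)

num_regions, feat_dim, hidden_dim, attn_dim = 64, 1536, 512, 256

# Hypothetical encoder output: one 1536-d feature vector per image region,
# as an Inception-ResNetV2 backbone would produce for a 299x299 image.
features = rng.standard_normal((num_regions, feat_dim))
h = rng.standard_normal(hidden_dim)  # current LSTM decoder hidden state

# Learned projections (randomly initialised here, purely for illustration).
W_f = rng.standard_normal((feat_dim, attn_dim)) * 0.01
W_h = rng.standard_normal((hidden_dim, attn_dim)) * 0.01
v = rng.standard_normal(attn_dim) * 0.01

# Additive attention: score each image region against the decoder state.
scores = np.tanh(features @ W_f + h @ W_h) @ v  # shape: (num_regions,)

# Softmax over regions gives the attention weights.
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Context vector: attention-weighted sum of region features, which is fed
# (together with the previous word embedding) into the next LSTM step.
context = weights @ features  # shape: (feat_dim,)
```

At each decoding step the weights are recomputed from the new hidden state, which is what lets the model "concentrate on specific portions within the images" while emitting each word of the caption.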


Article Details

How to Cite
Image-to-Text Description Approach based on Deep Learning Models. (2024). Bilad Alrafidain Journal for Engineering Science and Technology, 3(1), 33-46. https://doi.org/10.56990/bajest/2024.030103
Section
Articles
