Computers, Materials & Continua, Vol. 63, No. 3, 2020, pp. 1575-1589, doi:10.32604/cmc.2020.07451
OPEN ACCESS
ARTICLE
Visual Relationship Detection with Contextual Information
Yugang Li1,2,*, Yongbin Wang1, Zhe Chen2, Yuting Zhu3
1 School of Computer and Cyberspace Security, Communication University of China, Beijing, 100024, China.
2 Academy of Broadcasting Science, Beijing, 100866, China.
3 School of Electrical and Electronic Engineering, Nanyang Technological University, Nanyang Avenue, 639798, Singapore.
* Corresponding Author: Yugang Li.
Received 21 May 2019; Accepted 01 July 2019; Issue published 30 April 2020
Abstract
Understanding an image goes beyond recognizing and locating the objects in it; the relationships between objects are also very important to image understanding. Most previous methods have focused on predicting relationships locally, in isolation. However, relationships in real-world images are often determined by the surrounding objects and other contextual information. In this work, we employ this insight to propose a novel framework for visual relationship detection. The core of the framework is a relationship inference network, a recurrent structure designed to combine the global contextual information of the objects to infer the relationships in the image. Experimental results on Stanford VRD and Visual Genome demonstrate that the proposed method achieves good performance in both efficiency and accuracy. Finally, we demonstrate the value of visual relationships on two computer vision tasks: image retrieval and scene graph generation.
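The abstract describes the relationship inference network only at a high level. As a rough illustration of the idea rather than the authors' implementation, the minimal PyTorch sketch below assumes a gated recurrent unit (GRU) that aggregates the features of surrounding context objects into a global context vector, which is then fused with subject and object features to score predicates; all dimensions, names, and the fusion scheme are illustrative assumptions.

    # Minimal sketch (assumed, not the authors' exact architecture): a GRU-based
    # relationship inference module that consumes subject, object, and
    # surrounding context-object features and predicts a predicate distribution.
    import torch
    import torch.nn as nn

    class RelationshipInferenceNet(nn.Module):
        def __init__(self, feat_dim=512, hidden_dim=256, num_predicates=70):
            super().__init__()
            # GRU aggregates the sequence of context-object features into a
            # single global context vector (its last hidden state).
            self.context_gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
            # Classifier fuses subject, object, and global context features.
            self.classifier = nn.Sequential(
                nn.Linear(feat_dim * 2 + hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, num_predicates),
            )

        def forward(self, subj_feat, obj_feat, context_feats):
            # subj_feat, obj_feat: (batch, feat_dim)
            # context_feats: (batch, num_context_objects, feat_dim)
            _, h_n = self.context_gru(context_feats)   # (1, batch, hidden_dim)
            global_context = h_n.squeeze(0)            # (batch, hidden_dim)
            fused = torch.cat([subj_feat, obj_feat, global_context], dim=-1)
            return self.classifier(fused)              # predicate logits

    if __name__ == "__main__":
        net = RelationshipInferenceNet()
        logits = net(torch.randn(2, 512), torch.randn(2, 512), torch.randn(2, 5, 512))
        print(logits.shape)  # torch.Size([2, 70])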
Keywords
Visual relationship, deep learning, gated recurrent units, image retrieval, contextual information.
Cite This Article
Y. Li, Y. Wang, Z. Chen and Y. Zhu, "Visual relationship detection with contextual information," Computers, Materials & Continua, vol. 63, no. 3, pp. 1575–1589, 2020.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.