Semi-supervised vision-language mapping via variational learning

Yuming Shen, Li Zhang, Ling Shao

Research output: Chapter in Book/Report/Conference proceeding › Chapter

3 Citations (Scopus)

Abstract

Understanding the semantic relations between vision and language data has become a research trend in artificial intelligence and robotic systems. The scarcity of paired training data is a key obstacle to vision-language understanding. We address the problem of image-sentence cross-modal retrieval when paired training samples are insufficient. Inspired by recent work on variational inference, this paper extends the autoencoding variational Bayes framework to a semi-supervised model for the image-sentence mapping task. Our method does not require all training images and sentences to be paired. The proposed model is an end-to-end system consisting of a two-level variational embedding structure, where unpaired data are involved in the first-level embedding to support intra-modality statistics, so that the lower bound of the joint marginal likelihood of paired data embeddings can be better approximated. The proposed retrieval model is evaluated on two popular datasets, Flickr30K and Flickr8K, achieving superior performance compared with related state-of-the-art methods.
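To make the two-level structure described in the abstract more concrete, the following is a minimal PyTorch sketch, not the authors' released code: it assumes pre-extracted image features (e.g. 4096-d CNN activations) and sentence features (e.g. 300-d embeddings), and all module names, dimensions, and the reconstruction/KL terms are illustrative assumptions. First-level per-modality VAEs are trained on both paired and unpaired data to capture intra-modality statistics; a second-level embedding then approximates a lower bound on the joint marginal likelihood of paired image-sentence codes.

```python
# Hypothetical sketch of a two-level variational embedding for semi-supervised
# image-sentence mapping. Feature dimensions and layer sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityVAE(nn.Module):
    """First-level embedding: a per-modality VAE trained on paired and
    unpaired samples to model intra-modality statistics."""

    def __init__(self, in_dim, latent_dim=256):
        super().__init__()
        self.enc = nn.Linear(in_dim, 512)
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        recon = self.dec(z)
        # Per-sample negative ELBO: reconstruction error + KL to a standard normal prior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
        neg_elbo = F.mse_loss(recon, x, reduction="none").sum(1) + kl
        return z, neg_elbo


class CrossModalVAE(nn.Module):
    """Second-level embedding: a shared latent space over paired image and
    sentence codes, trained by bounding their joint marginal likelihood."""

    def __init__(self, latent_dim=256, joint_dim=128):
        super().__init__()
        self.img_vae = ModalityVAE(in_dim=4096, latent_dim=latent_dim)
        self.txt_vae = ModalityVAE(in_dim=300, latent_dim=latent_dim)
        self.joint_mu = nn.Linear(2 * latent_dim, joint_dim)
        self.joint_logvar = nn.Linear(2 * latent_dim, joint_dim)
        self.img_dec = nn.Linear(joint_dim, latent_dim)
        self.txt_dec = nn.Linear(joint_dim, latent_dim)

    def paired_loss(self, img, txt):
        z_i, neg_elbo_i = self.img_vae(img)
        z_t, neg_elbo_t = self.txt_vae(txt)
        h = torch.cat([z_i, z_t], dim=1)
        mu, logvar = self.joint_mu(h), self.joint_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
        # Reconstruct both modality codes from the shared latent variable.
        rec = F.mse_loss(self.img_dec(z), z_i.detach(), reduction="none").sum(1) \
            + F.mse_loss(self.txt_dec(z), z_t.detach(), reduction="none").sum(1)
        return (neg_elbo_i + neg_elbo_t + rec + kl).mean()

    def unpaired_loss(self, img=None, txt=None):
        # Unpaired samples only update the first-level, per-modality embeddings.
        loss = 0.0
        if img is not None:
            loss = loss + self.img_vae(img)[1].mean()
        if txt is not None:
            loss = loss + self.txt_vae(txt)[1].mean()
        return loss
```

At retrieval time, one plausible use of such a model is to rank candidates by similarity between the first-level (or joint) latent codes of a query image and the candidate sentences; the exact scoring function is not specified in the abstract.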
Original language: English
Title of host publication: 2017 IEEE International Conference on Robotics and Automation (ICRA)
Place of publication: Piscataway
Publisher: IEEE
Pages: 1349-1354
ISBN (Print): 978-1-5090-4634-8
DOIs
Publication status: E-pub ahead of print - 24 Jul 2017
