Aligning Salient Objects to Queries: A Multi-modal and Multi-object Image Retrieval Framework
Dey, S; Dutta, A; Ghosh, SK; Valveny, E; Lladós, J; Pal, U
Date: 2 June 2019
Publisher
Springer Verlag
Abstract
In this paper we propose an approach for multi-modal image retrieval in multi-labelled images. A multi-modal deep network architecture is formulated to jointly model sketches and text as input query modalities in a common embedding space, which is then further aligned with the image feature space. Our architecture also relies on salient object detection through a supervised LSTM-based visual attention model learned from convolutional features. Both the alignment between the queries and the image and the supervision of the attention on the images are obtained by generalizing the Hungarian algorithm with different loss functions. This permits encoding the object-based features and their alignment with the query irrespective of whether particular combinations of objects co-occur in the training set. We validate the performance of our approach on standard single-object and multi-object datasets, showing state-of-the-art performance on every dataset.
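The core alignment step described above can be illustrated with a minimal sketch (not the authors' code): a set of query embeddings is matched to a set of detected-object embeddings by solving a minimum-cost bipartite assignment, which is what the Hungarian algorithm computes. The embeddings, dimensionality, and cost function below are illustrative assumptions; for small sets we brute-force over permutations, whereas a real system would use an O(n³) Hungarian solver such as `scipy.optimize.linear_sum_assignment`.

```python
# Hypothetical sketch of Hungarian-style query/object alignment.
# Queries and objects are embedding vectors; the cost is squared
# Euclidean distance, and the matching minimizes the total cost.
from itertools import permutations

def pairwise_sq_dist(queries, objects):
    """Cost matrix: squared Euclidean distance for each query/object pair."""
    return [[sum((q - o) ** 2 for q, o in zip(qv, ov)) for ov in objects]
            for qv in queries]

def hungarian_match(queries, objects):
    """Return (assignment, total cost), where assignment[i] is the index
    of the object matched to query i. Assumes len(queries) <= len(objects)."""
    cost = pairwise_sq_dist(queries, objects)
    m, n = len(queries), len(objects)
    best, best_cost = None, float("inf")
    # Brute force over all injective assignments (fine for tiny m, n).
    for perm in permutations(range(n), m):
        c = sum(cost[i][perm[i]] for i in range(m))
        if c < best_cost:
            best, best_cost = perm, c
    return best, best_cost

# Toy example: two query embeddings, three object embeddings in 2-D.
queries = [(0.0, 0.0), (1.0, 1.0)]
objects = [(1.1, 0.9), (5.0, 5.0), (0.1, -0.1)]
assign, loss = hungarian_match(queries, objects)
print(assign, round(loss, 2))  # -> (2, 0) 0.04
```

The matched distances would then be summed into a loss term; the paper's contribution is generalizing this matching step with different loss functions for both query–image alignment and attention supervision.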
Computer Science
Faculty of Environment, Science and Economy