This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND).
Abstract: © 2018. Zero-Shot Hashing (ZSH) aims to learn compact binary codes that preserve the semantic content of images from unseen categories. Conventional approaches project visual features into a semantic space shared by both seen and unseen categories. However, we observe that such a one-way paradigm suffers from the visual-semantic ambiguity problem: semantic concepts (e.g. attributes) do not explicitly correspond to visual patterns, and vice versa. This problem can lead to large variance in the visual features associated with each attribute. In this paper, we investigate how to remove such semantic ambiguity based on the observed visual appearances. In particular, we propose (1) a novel latent attribute space to bridge the gap between visual appearances and semantic expressions; (2) a dual-graph regularised embedding algorithm, Visual-Semantic Ambiguity Removal (VSAR), that simultaneously extracts the components shared by visual and semantic information and mutually aligns the data distributions based on the intrinsic local structures of both spaces; (3) a new zero-shot hashing framework that handles both instance-level and category-level tasks. We validate our method on four popular benchmarks. Extensive experiments demonstrate that our proposed approach significantly outperforms the state-of-the-art methods.
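For readers unfamiliar with the technique named in the abstract, dual-graph regularised embedding methods typically minimise a shared-embedding loss with graph-Laplacian penalties on both spaces. The following is a generic sketch of that family of objectives, not the paper's exact formulation; the symbols X, A, P, Q, L_v, L_s and the lambda weights are illustrative notation, not taken from the paper:

\[
\min_{P,\,Q}\; \lVert XP - AQ \rVert_F^2
\;+\; \lambda_1\,\mathrm{tr}\!\left(P^{\top} X^{\top} L_v\, X P\right)
\;+\; \lambda_2\,\mathrm{tr}\!\left(Q^{\top} A^{\top} L_s\, A Q\right)
\]

Here X holds visual features, A holds semantic attributes, P and Q project both into a common latent space, and L_v, L_s are graph Laplacians encoding the local neighbourhood structure of the visual and semantic spaces, respectively. In a hashing setting, binary codes can then be obtained, for example, by taking the sign of the latent embedding.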
Author(s): Long Y, Guan Y, Shao L
Publication type: Article
Publication status: Published
Journal: Pattern Recognition Letters
Year: 2019
Volume: 117
Pages: 186-192
Print publication date: 01/01/2019
Online publication date: 24/04/2018
Acceptance date: 16/04/2018
Date deposited: 17/01/2019
ISSN (print): 0167-8655
ISSN (electronic): 1872-7344
Publisher: Elsevier BV
URL: https://doi.org/10.1016/j.patrec.2018.04.024
DOI: 10.1016/j.patrec.2018.04.024