Lookup NU author(s): Dr Yang Long
Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
© 2019, Springer Science+Business Media, LLC, part of Springer Nature. Zero-shot learning (ZSL) has gained a great deal of attention due to its ability to recognize unseen categories after training on samples of only seen categories. Existing efforts have been devoted to learning a projection between the semantic space and the feature space, which has led to significant progress in ZSL. However, simply establishing a projection often suffers from the visual-semantic ambiguity problem and the hubness problem: visual patterns and semantic concepts often cannot be properly matched to each other, leading to inaccurate recognition results. To this end, in this paper we propose a novel ZSL model, namely Asymmetric Graph-based Zero-Shot Learning (AGZSL), which simultaneously preserves the class-level semantic manifold and the instance-level visual manifold in a latent space. In addition, to make the model more discriminative, we constrain the latent space to be orthogonal, meaning that projected visual features and semantic embeddings belonging to different categories are orthogonal in the latent space. We evaluate our approach on four benchmark datasets under both the standard zero-shot setting and the more realistic generalized zero-shot learning (GZSL) setting; the results show that AGZSL significantly improves performance compared with state-of-the-art methods.
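To make the projection-based setup the abstract refers to concrete, the sketch below implements the simple baseline it builds on: learn a linear map from visual features to semantic (attribute) space on seen classes, then classify an unseen-class sample by nearest semantic embedding. This is a minimal illustration on synthetic data, not the authors' AGZSL model; the graph-based manifold preservation and orthogonality constraints that distinguish AGZSL are not implemented here, and all variable names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: 12 seen classes, 3 unseen classes,
# d-dim visual features, k-dim semantic attribute vectors.
d, k = 64, 10
n_seen, n_unseen = 12, 3
seen_attrs = rng.normal(size=(n_seen, k))
unseen_attrs = rng.normal(size=(n_unseen, k))

# Visual features are simulated as a noisy linear image of the attributes.
true_W = rng.normal(size=(k, d))
labels = rng.integers(0, n_seen, size=400)
X = seen_attrs[labels] @ true_W + 0.01 * rng.normal(size=(400, d))
S = seen_attrs[labels]

# Learn a projection V: visual -> semantic by ridge regression,
# i.e. minimize ||X V - S||^2 + lam * ||V||^2 in closed form.
lam = 1.0
V = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ S)

def predict(x, class_attrs):
    """Project a visual feature into semantic space and return the
    index of the most cosine-similar class attribute vector."""
    z = x @ V
    sims = (class_attrs @ z) / (
        np.linalg.norm(class_attrs, axis=1) * np.linalg.norm(z) + 1e-12)
    return int(np.argmax(sims))

# Zero-shot inference on a sample drawn from unseen class 1.
x_test = unseen_attrs[1] @ true_W + 0.01 * rng.normal(size=d)
print(predict(x_test, unseen_attrs))  # prints the true unseen class, 1
```

The visual-semantic ambiguity and hubness problems the abstract mentions arise precisely because this kind of direct projection is fit only on seen classes; AGZSL instead learns a latent space with manifold and orthogonality constraints to mitigate them.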
Author(s): Wang Y, Zhang H, Zhang Z, Long Y
Publication type: Article
Publication status: Published
Journal: Multimedia Tools and Applications
Year: 2019
Issue: ePub ahead of Print
Online publication date: 14/05/2019
Acceptance date: 24/04/2019
ISSN (print): 1380-7501
ISSN (electronic): 1573-7721
Publisher: Springer New York LLC
URL: https://doi.org/10.1007/s11042-019-7689-y
DOI: 10.1007/s11042-019-7689-y