
Zero-shot learning and hashing with binary visual similes

Lookup NU author(s): Dr Yang Long



This is the authors' accepted manuscript of an article that has been published in its final definitive form by Springer New York LLC, 2019.

For re-use rights please refer to the publisher's terms and conditions.


© 2018, Springer Science+Business Media, LLC, part of Springer Nature. Conventional zero-shot learning methods usually learn mapping functions that project image features into a semantic embedding space, in which the nearest neighbors with predefined attributes are found. The predefined attributes, covering both seen and unseen classes, are often annotated with high-dimensional real values by experts, which costs a great deal of human labor. In this paper, we propose a simple but effective method to reduce the annotation work. In our strategy, only the unseen classes need to be annotated, with several binary codes, which amounts to only about one percent of the original annotation work. In addition, we design a Visual Similes Annotation System (ViSAS) to annotate the unseen classes, and we build both linear and deep mapping models and test them on four popular datasets. The experimental results show that our method outperforms the state-of-the-art methods in most circumstances.
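The core idea described in the abstract — mapping an image feature into a code space and matching it against per-class binary annotations — can be illustrated with a minimal sketch. All names and values below (`W`, the toy class codes, the feature vector) are hypothetical stand-ins, not the paper's actual model; the learned mapping is approximated here by a fixed linear projection followed by thresholding, with Hamming distance used for nearest-neighbor lookup:

```python
# Hypothetical sketch: zero-shot prediction against binary class codes.
# W stands in for a learned linear mapping; class_codes stands in for the
# expert-annotated binary codes of the unseen classes.

def binarize(feature, W):
    """Project a feature through W and threshold each output at zero."""
    code = []
    for row in W:
        s = sum(w * x for w, x in zip(row, feature))
        code.append(1 if s > 0 else 0)
    return code

def hamming(a, b):
    """Count positions where two binary codes disagree."""
    return sum(x != y for x, y in zip(a, b))

def predict(feature, W, class_codes):
    """Return the unseen class whose binary code is nearest in Hamming distance."""
    code = binarize(feature, W)
    return min(class_codes, key=lambda c: hamming(code, class_codes[c]))

# Toy example: 3-bit codes annotated for two unseen classes.
class_codes = {"zebra": [1, 0, 1], "whale": [0, 1, 0]}
W = [[1.0, -1.0], [-1.0, 1.0], [0.5, 0.2]]   # stand-in learned mapping
feature = [2.0, -1.0]                         # stand-in image feature
print(predict(feature, W, class_codes))       # → zebra
```

Because the codes are binary rather than high-dimensional real-valued attribute vectors, each unseen class needs only a handful of bits of annotation, which is the source of the roughly hundredfold reduction in annotation effort claimed above.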

Publication metadata

Author(s): Zhang H, Long Y, Shao L

Publication type: Article

Publication status: Published

Journal: Multimedia Tools and Applications

Year: 2019

Volume: 78

Issue: 17

Pages: 24147-24165

Print publication date: 01/09/2019

Online publication date: 16/11/2018

Acceptance date: 05/11/2018

Date deposited: 10/01/2019

ISSN (print): 1380-7501

ISSN (electronic): 1573-7721

Publisher: Springer New York LLC


DOI: 10.1007/s11042-018-6842-3

