This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
© 2022. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.

Abstract: For human gesture recognition tasks, recent fully supervised deep learning models have achieved impressive performance when sufficient samples of predefined gesture classes are provided. However, these models do not generalise well to new classes, which limits their accuracy on unforeseen gesture categories. Few-shot learning based human gesture recognition (FSL-HGR) addresses this problem by supporting faster learning from only a few samples of new gesture classes. In this paper, we develop a novel FSL-HGR method which enables energy-efficient inference across a large number of classes. Specifically, we adapt a surrogate gradient-based spiking neural network model to efficiently process video sequences collected via dynamic vision sensors. With a focus on energy efficiency, we design two strategies, spiking noise suppression and emission sparsity learning, to significantly reduce the spike emission rate in all layers of the network. Additionally, we introduce a dual-speed stream contrastive learning approach that achieves high accuracy without the computational burden of dual-stream processing at inference. Our experimental results demonstrate the effectiveness of our approach. We achieve state-of-the-art accuracy of 84.75% and 92.82% on the 5-way 1-shot and 5-way 5-shot learning tasks, with 60.02% and 58.21% fewer spike emissions respectively, compared to a standard SNN architecture trained without our learning strategies, when processing the DVS128 Gesture dataset.
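To illustrate the core mechanism the abstract refers to, the following is a minimal NumPy sketch of a leaky integrate-and-fire (LIF) spiking layer with a fast-sigmoid surrogate gradient, which is one common choice for surrogate gradient-based SNN training. This is not the authors' implementation; the function names (`lif_forward`, `surrogate_grad`) and parameters (`tau`, `v_th`, `alpha`) are illustrative assumptions, and the spike count returned corresponds to the "spike emission number" that the paper's strategies aim to reduce.

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_th=0.9):
    """Leaky integrate-and-fire layer over a spike train.

    inputs: array of shape (T, N) -- T binary event frames for N neurons
            (a hypothetical stand-in for DVS event data).
    Returns the output spike train and the total spike emission count.
    """
    T, N = inputs.shape
    v = np.zeros(N)                         # membrane potentials
    spikes = np.zeros((T, N))
    for t in range(T):
        v = v + (inputs[t] - v) / tau       # leaky integration of input current
        fired = (v >= v_th).astype(float)   # Heaviside threshold (non-differentiable)
        spikes[t] = fired
        v = v * (1.0 - fired)               # hard reset for neurons that spiked
    return spikes, spikes.sum()

def surrogate_grad(v, v_th=0.9, alpha=2.0):
    """Fast-sigmoid surrogate for the Heaviside derivative.

    During backpropagation the zero-almost-everywhere derivative of the
    threshold function is replaced by this smooth approximation.
    """
    return alpha / (2.0 * (1.0 + alpha * np.abs(v - v_th)) ** 2)
```

Reducing the emission count returned by `lif_forward` directly lowers energy cost on neuromorphic hardware, since computation there is driven by spike events; this is the quantity the reported 60.02% and 58.21% reductions refer to.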
Author(s): Jing L, Wang Y, Chen T, Dora S, Ji Z, Fang H
Publication type: Conference Proceedings (inc. Abstract)
Publication status: Published
Conference Name: 33rd British Machine Vision Conference Proceedings (BMVC 2022)
Year of Conference: 2022
Online publication date: 21/11/2022
Acceptance date: 02/04/2018
Date deposited: 17/11/2023
Publisher: British Machine Vision Association