Lookup NU author(s): Dr Cong Zhang
This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
Inducing semantic representations directly from speech signals is a highly challenging task, but it has many useful applications in speech mining and spoken language understanding. This study tackles the unsupervised learning of semantic representations for spoken utterances. By converting speech signals into hidden units generated through acoustic unit discovery, we propose WavEmbed, a multimodal sequential autoencoder that predicts hidden units from a dense representation of speech. Second, we propose S-HuBERT, which induces meaning through knowledge distillation: a sentence embedding model is first trained on hidden units and then passes its knowledge to a speech encoder through contrastive learning. The best-performing model achieves a moderate correlation (0.5–0.6) with human judgments, without relying on any labels or transcriptions. Furthermore, these models can easily be extended to leverage textual transcriptions of speech, yielding much better speech embeddings that are strongly correlated with human annotations. Our proposed methods are applicable to the development of purely data-driven systems for speech mining, indexing and search.
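The abstract's distillation step can be illustrated with a short sketch. The snippet below shows a generic InfoNCE-style contrastive objective of the kind the S-HuBERT description implies, where a frozen teacher (a sentence embedding model trained on hidden units) supervises a student speech encoder; all names (`teacher`, `student`, `temperature`) are illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of contrastive knowledge distillation, assuming a frozen
# teacher that embeds discovered hidden units and a trainable student that
# embeds raw speech. Hypothetical names; not the authors' released code.
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(speech_emb, unit_emb, temperature=0.05):
    """InfoNCE-style loss: each speech embedding should be most similar to
    the teacher embedding of its own utterance within the batch."""
    speech_emb = F.normalize(speech_emb, dim=-1)
    unit_emb = F.normalize(unit_emb, dim=-1)
    logits = speech_emb @ unit_emb.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(speech_emb.size(0), device=speech_emb.device)
    return F.cross_entropy(logits, targets)

# Usage sketch: only the student receives gradients; the teacher is detached.
# loss = contrastive_distillation_loss(student(wavs), teacher(units).detach())
```

In this setup the positive pair is the (speech, hidden-unit) embedding of the same utterance, and the other utterances in the batch serve as in-batch negatives, which matches the paper's stated goal of aligning a speech encoder with a unit-level sentence embedding model without labels or transcriptions.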
Author(s): Zhu J, Tian Z, Liu Y, Zhang C, Lo C
Publication type: Article
Publication status: Published
Journal: Findings of the Association for Computational Linguistics: EMNLP 2022
Year: 2022
Pages: 1134-1154
Print publication date: 01/12/2022
Acceptance date: 01/12/2022
Date deposited: 14/03/2023
Publisher: Association for Computational Linguistics
URL: https://aclanthology.org/2022.findings-emnlp.81.pdf