NU author(s): Professor Bin Gao, Dr Long Jiang, Dr Wai Lok Woo
This is the authors' accepted manuscript of an article that has been published in its final definitive form by IEEE, 2018.
For re-use rights please refer to the publisher's terms and conditions.
With increasing stress in work and study environments, mental health has become a major subject in social-interaction research. Researchers generally analyze psychological health states through social-perception behavior. Speech signal processing is an important research direction because it can objectively assess a person's mental health from social sensing through the extraction and analysis of speech features. In this paper, a four-week long-term social-monitoring experiment was conducted using the proposed wearable device. A set of wellbeing questionnaires administered to a group of students is used to objectively relate physical and mental health to segmented speech-social features in completely natural daily situations. In particular, we develop transfer learning for acoustic classification. By training the model on the TUT Acoustic Scenes 2017 dataset, it learns basic scene features; through transfer learning, the model is then applied to the audio-segmentation process using only four wearable speech-social features (Energy, Entropy, Brightness, Formant). The results show promising performance in classifying various acoustic scenes in unconstrained, natural situations using the wearable long-term speech-social dataset.
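To illustrate the kind of per-frame features the abstract names, the sketch below computes three of the four (Energy, Entropy, Brightness) from a short audio frame using common textbook definitions. These formulas and the 1.5 kHz brightness cutoff are illustrative assumptions, not necessarily the authors' exact definitions; Formant extraction, which typically requires LPC analysis, is omitted for brevity.

```python
import numpy as np

def speech_social_features(frame, sr=16000):
    """Illustrative per-frame features; formulas are common choices,
    not necessarily those used in the paper."""
    # Short-time energy: mean squared amplitude of the frame
    energy = float(np.mean(frame ** 2))
    # One-sided power spectrum of the windowed frame
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    p = spec / (spec.sum() + 1e-12)          # normalize to a distribution
    # Spectral entropy: how spread out the spectral energy is
    entropy = float(-np.sum(p * np.log2(p + 1e-12)))
    # Brightness: fraction of spectral energy above a cutoff (assumed 1.5 kHz)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    brightness = float(spec[freqs > 1500].sum() / (spec.sum() + 1e-12))
    return energy, entropy, brightness

# Example on a synthetic 440 Hz tone: energy is positive, entropy is low
# (energy concentrated in one bin), brightness is near 0 (tone < 1.5 kHz).
t = np.arange(1024) / 16000
e, h, b = speech_social_features(np.sin(2 * np.pi * 440 * t))
```

In the paper's pipeline, such low-dimensional frame features feed the transferred classifier rather than raw spectrograms, which is what makes the approach feasible on a wearable device.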
Author(s): Chen Y, Gao B, Jiang L, Yin K, Gu J, Woo WL
Publication type: Article
Publication status: Published
Journal: IEEE Access
Online publication date: 15/10/2018
Acceptance date: 02/04/2018
Date deposited: 30/10/2018
ISSN (electronic): 2169-3536