TY - JOUR
T1 - Transfer Learning for Wearable Long-Term Social Speech Evaluations
AU - Chen, Yuanpeng
AU - Gao, Bin
AU - Jiang, Long
AU - Yin, Kai
AU - Gu, Jun
AU - Woo, Wai Lok
PY - 2018/10/15
Y1 - 2018/10/15
N2 - With an increase in stress in work and study environments, mental health issues have become a major subject in current social interaction research. Generally, researchers analyze psychological health states through social perception behavior. Speech signal processing is an important research direction, as it can objectively assess the mental health of a person from social sensing through the extraction and analysis of speech features. In this paper, a four-week long-term social monitoring experiment using the proposed wearable device has been conducted. A set of well-being questionnaires administered to a group of students is employed to objectively relate physical and mental health to segmented speech-social features in completely natural daily situations. In particular, we have developed transfer learning for acoustic classification. By training the model on the TUT Acoustic Scenes 2017 data set, the model learns basic scene features. Through transfer learning, the model is transferred to the audio segmentation process using only four wearable speech-social features (energy, entropy, brightness, and formant). The results show promise in classifying various acoustic scenes in unconstrained and natural situations using the wearable long-term speech-social data set.
AB - With an increase in stress in work and study environments, mental health issues have become a major subject in current social interaction research. Generally, researchers analyze psychological health states through social perception behavior. Speech signal processing is an important research direction, as it can objectively assess the mental health of a person from social sensing through the extraction and analysis of speech features. In this paper, a four-week long-term social monitoring experiment using the proposed wearable device has been conducted. A set of well-being questionnaires administered to a group of students is employed to objectively relate physical and mental health to segmented speech-social features in completely natural daily situations. In particular, we have developed transfer learning for acoustic classification. By training the model on the TUT Acoustic Scenes 2017 data set, the model learns basic scene features. Through transfer learning, the model is transferred to the audio segmentation process using only four wearable speech-social features (energy, entropy, brightness, and formant). The results show promise in classifying various acoustic scenes in unconstrained and natural situations using the wearable long-term speech-social data set.
KW - Long-term social monitoring
KW - Psychological
KW - transfer learning
U2 - 10.1109/ACCESS.2018.2876122
DO - 10.1109/ACCESS.2018.2876122
M3 - Article
VL - 6
SP - 61305
EP - 61316
JO - IEEE Access
JF - IEEE Access
SN - 2169-3536
ER -