Transfer Learning for Wearable Long-Term Social Speech Evaluations

Yuanpeng Chen, Bin Gao, Long Jiang, Kai Yin, Jun Gu, Wai Lok Woo

Research output: Contribution to journal › Article › peer-review


Abstract

With increasing stress in work and study environments, mental health has become a major subject in current social interaction research. Researchers generally analyze psychological health states through social perception behavior. Speech signal processing is an important research direction, as it can objectively assess a person's mental health from social sensing through the extraction and analysis of speech features. In this paper, a four-week long-term social monitoring study using the proposed wearable device has been conducted. A set of well-being questionnaires administered to a group of students is employed to objectively relate physical and mental health to segmented speech-social features in completely natural daily situations. In particular, we have developed transfer learning for acoustic classification. By training the model on the TUT Acoustic Scenes 2017 data set, the model learns basic scene features. Through transfer learning, the model is then transferred to the audio segmentation process using only four wearable speech-social features (energy, entropy, brightness, and formant). The obtained results show promising classification of various acoustic scenes in unconstrained, natural situations using the wearable long-term speech-social data set.
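The abstract names four frame-level speech-social features but does not spell out their formulas. The following is a minimal sketch of one common way to compute them: short-time energy, spectral entropy, spectral centroid as "brightness", and the first LPC formant. The frame length, window, and LPC order are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: common definitions of the four features (energy, entropy,
# brightness, formant); the paper's exact formulas may differ.
import numpy as np
from scipy.linalg import solve_toeplitz

def frame_features(frame: np.ndarray, sr: int, lpc_order: int = 12):
    """Return (energy, entropy, brightness, first formant in Hz) for one frame."""
    # Short-time energy: mean squared amplitude of the frame.
    energy = float(np.mean(frame ** 2))

    # Power spectrum of the Hann-windowed frame, normalised to a distribution.
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    p = spec / (spec.sum() + 1e-12)

    # Spectral entropy: flat (noise-like) spectra score high, tonal ones low.
    entropy = float(-np.sum(p * np.log2(p + 1e-12)))

    # Brightness as the spectral centroid (energy-weighted mean frequency).
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    brightness = float(np.sum(freqs * p))

    # First formant via LPC: solve the Yule-Walker equations for the
    # predictor coefficients, then take the lowest positive-frequency pole.
    # Assumes a non-silent frame (the autocorrelation matrix is nonsingular).
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz(r[:lpc_order], r[1:lpc_order + 1])
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[np.imag(roots) > 0]          # one of each conjugate pair
    angles = np.sort(np.angle(roots)) * sr / (2.0 * np.pi)
    formant1 = float(angles[0]) if len(angles) else 0.0

    return energy, entropy, brightness, formant1

# Usage on a placeholder 25 ms frame at 16 kHz (values are illustrative).
print(frame_features(np.random.randn(400), sr=16000))
```

The transfer-learning step can likewise be sketched, assuming a small dense network; the paper's actual architecture and hyperparameters are not reproduced here. The idea is to pre-train a scene classifier on the TUT Acoustic Scenes 2017 source task (15 scene classes), then freeze the shared layers and fine-tune a new head on the wearable speech-social data. Layer sizes, class counts, and the `tut_*`/`wearable_*` variable names are hypothetical.

```python
# Hedged sketch of feature-transfer fine-tuning with Keras.
from tensorflow import keras

N_FEATURES = 4        # energy, entropy, brightness, formant
N_SCENES = 15         # TUT Acoustic Scenes 2017 classes
N_TARGET_CLASSES = 4  # hypothetical wearable segment classes

# Stage 1: pre-train shared feature layers on the source (scene) task.
base = keras.Sequential([
    keras.Input(shape=(N_FEATURES,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
], name="shared_feature_layers")
source_model = keras.Sequential(
    [base, keras.layers.Dense(N_SCENES, activation="softmax")])
source_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# source_model.fit(tut_features, tut_labels, epochs=30)  # TUT source data

# Stage 2: freeze the shared layers; fine-tune a new head on wearable data.
base.trainable = False
target_model = keras.Sequential(
    [base, keras.layers.Dense(N_TARGET_CLASSES, activation="softmax")])
target_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# target_model.fit(wearable_features, wearable_labels, epochs=10)
```

Freezing the shared layers keeps the scene representation learned from TUT intact while the new head adapts to the wearable segmentation task, which is the standard feature-transfer pattern the abstract describes.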
Original language: English
Pages (from-to): 61305-61316
Number of pages: 12
Journal: IEEE Access
Volume: 6
DOIs
Publication status: Published - 15 Oct 2018

Keywords

  • Long-term social monitoring
  • psychological
  • transfer learning

