Multi-view Temporal Ensemble for Classification of Non-Stationary Signals

Bee Hock David Koh, Wai Lok Woo

Research output: Contribution to journal › Article › peer-review

13 Citations (Scopus)
34 Downloads (Pure)

Abstract

In the classification of non-stationary time series data such as sounds, it is often tedious and expensive to obtain a training set that is representative of the target concept. To alleviate this problem, the proposed method treats the outputs of a number of deep learning sub-models as views of the same target concept that can be linearly combined according to their complementarity. It is proposed that a view's complementarity be its contribution to the global view, chosen in this paper to be the Laplacian eigenmap of the combined data. Complementarity is computed by alternating optimization, a process that involves the cost function of the Laplacian eigenmap and the weights of the linear combination. By blending the views in this way, a more complete view of the underlying phenomenon is made available to the final classifier. Better generalization is obtained, as the consensus between the views reduces the variance while the increase in discriminatory information reduces the bias. Experiments with artificial views of environmental sounds, formed by deep learning structures of different configurations, show that the proposed method improves classification performance.
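The abstract's alternating scheme, which couples a Laplacian-eigenmap cost with the linear-combination weights of the views, can be sketched as follows. This is a minimal illustration, not the paper's exact updates: the views are assumed to be same-shaped feature matrices (e.g. sub-model outputs), the graph uses a dense Gaussian kernel, and the hypothetical weight update favors views on whose graph the shared embedding varies smoothly.

```python
import numpy as np

def laplacian(X, sigma=1.0):
    # Dense Gaussian-kernel graph Laplacian over the rows (samples) of X.
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def eigenmap(L, k=2):
    # Laplacian eigenmap: bottom k eigenvectors, skipping the constant one.
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:k + 1]

def fuse_views(views, k=2, n_iter=10):
    """Alternating-optimization sketch: combination weights w vs. embedding Y."""
    V = len(views)
    w = np.full(V, 1.0 / V)              # start from a uniform blend
    Ls = [laplacian(X) for X in views]   # per-view graph Laplacians
    for _ in range(n_iter):
        # (1) Fix w: embed the weighted linear combination of the views.
        Z = sum(wv * X for wv, X in zip(w, views))
        Y = eigenmap(laplacian(Z), k)
        # (2) Fix Y: score each view by its eigenmap cost tr(Y^T L_v Y)
        # (hypothetical update: smoother views get larger weights).
        costs = np.array([np.trace(Y.T @ L @ Y) for L in Ls])
        w = 1.0 / (costs + 1e-12)
        w /= w.sum()
    return w, Y
```

The returned `Y` would serve as the fused "global view" fed to the final classifier, with `w` indicating each sub-model's contribution to it.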

Original language: English
Pages (from-to): 32482-32491
Number of pages: 10
Journal: IEEE Access
Volume: 7
Early online date: 7 Mar 2019
DOIs
Publication status: Published - 26 Mar 2019

Keywords

  • deep learning
  • data fusion
  • time series classification
