Ensemble System of Deep Neural Networks for Single-Channel Audio Separation

Musab T. S. Al-Kaltakchi, Ahmad Saeed Mohammad*, Wai Lok Woo

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Speech separation is a well-known problem, especially when only one sound mixture is available. Estimating the Ideal Binary Mask (IBM) is one solution, and recent research has focused on the supervised classification approach, for which extracting features from the sources is the critical challenge. Speech separation has been accomplished with a variety of feature extraction models; the majority, however, concentrate on a single feature, and the complementary nature of diverse features has not been thoroughly investigated. In this paper, we propose a deep neural network (DNN) ensemble architecture to fully exploit the complementary nature of the diverse features obtained from raw acoustic features. Instead of employing the features acquired from the output layer, we examined the penultimate discriminative representations. The learned representations were also fused to produce a new feature vector, which was then classified using the Extreme Learning Machine (ELM). In addition, a genetic algorithm (GA) was designed to optimize the parameters globally. Experimental results showed that the proposed system fully exploited the various features and produced a high-quality IBM under different conditions.
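The IBM mentioned in the abstract has a standard definition: a time-frequency cell is labelled 1 when the target's local energy exceeds the interference's by more than a local criterion, and 0 otherwise. A minimal sketch of that computation is shown below; the function name, the toy spectrograms, and the 0 dB criterion are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def ideal_binary_mask(target_mag, interference_mag, lc_db=0.0):
    """Sketch of the Ideal Binary Mask (IBM).

    A time-frequency cell is set to 1 when the local
    target-to-interference ratio (in dB) exceeds the local
    criterion lc_db, and 0 otherwise. Inputs are magnitude
    spectrograms of the same shape (frequency x frames).
    """
    eps = 1e-12  # guard against log of / division by zero
    snr_db = 20.0 * np.log10((target_mag + eps) / (interference_mag + eps))
    return (snr_db > lc_db).astype(np.float32)

# Toy example: 3 frequency bins x 2 frames (illustrative values)
target = np.array([[0.9, 0.1],
                   [0.5, 0.5],
                   [0.2, 0.8]])
interf = np.array([[0.1, 0.9],
                   [0.5, 0.5],
                   [0.8, 0.2]])
mask = ideal_binary_mask(target, interf)
```

Applying the mask element-wise to the mixture spectrogram and inverting the transform yields the separated estimate; the supervised approach in the paper trains classifiers to predict these binary labels when only the mixture is observed.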
Original language: English
Article number: 352
Number of pages: 24
Journal: Information
Volume: 14
Issue number: 7
DOIs
Publication status: Published - 21 Jun 2023

Keywords

  • single-channel audio separation
  • deep neural networks
  • ideal binary mask
  • feature fusion
