Abstract
We present a new approach to single-channel speech separation that is aided by a user-generated exemplar source recorded from a microphone. Our method deviates from conventional model-based methods, which rely heavily on speaker-dependent training data. We readdress the problem with a new approach based on utterance-dependent patterns extracted from the user-generated exemplar source. The proposed approach is less restrictive: it requires no speaker-dependent information, yet it exceeds the performance of conventional model-based separation methods in separating male-male speech mixtures. We combine general speaker-independent (SI) features with specifically generated utterance-dependent (UD) features in a joint probability model. The UD features are initially extracted from the user-generated exemplar source and represented as statistical estimates. These estimates are then calibrated using information extracted from the mixture to statistically represent the target source. The UD probability model is subsequently generated to resolve problems of ambiguity and to offer better cues for separation. The proposed algorithm is tested and compared with recent methods on the GRID database and the Mocha-TIMIT database.
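To make the idea of combining general speaker-independent (SI) statistics with exemplar-derived utterance-dependent (UD) statistics more concrete, the following is a minimal, illustrative Python sketch. It is not the paper's algorithm: the log-spectral feature extraction, the use of diagonal-covariance GMMs from scikit-learn, and the interpolation weight `lambda_ud` are all assumptions made purely for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def log_spectra(signal, n_fft=512, hop=128):
    """Hypothetical feature extraction: log-magnitude spectra of windowed frames."""
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))
    return np.log(spec + 1e-8)

def fit_ud_model(exemplar_signal, n_components=16):
    """Fit an utterance-dependent (UD) GMM on the user-generated exemplar recording."""
    feats = log_spectra(exemplar_signal)
    return GaussianMixture(n_components=n_components, covariance_type="diag").fit(feats)

def joint_log_likelihood(si_model, ud_model, mixture_feats, lambda_ud=0.5):
    """Blend per-frame log-likelihoods of a generic SI model and the UD model.

    lambda_ud is an assumed interpolation weight; the paper instead calibrates
    the UD estimates against the mixture before combining them.
    """
    return ((1.0 - lambda_ud) * si_model.score_samples(mixture_feats)
            + lambda_ud * ud_model.score_samples(mixture_feats))
```

In this sketch, a pretrained SI GMM and the UD GMM fitted on the exemplar would be passed to `joint_log_likelihood`; frames of the mixture with higher joint likelihood are then treated as stronger evidence for the target speaker.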
Original language | English |
---|---|
Pages (from-to) | 2087-2100 |
Number of pages | 14 |
Journal | IEEE/ACM Transactions on Audio, Speech, and Language Processing |
Volume | 22 |
Issue number | 12 |
Early online date | 12 Sept 2014 |
DOIs | |
Publication status | Published - Dec 2014 |
Keywords
- Concurrent pitch tracking
- Exemplar assistance
- Factorial hidden Markov model (FHMM)
- Gaussian mixture model (GMM)
- Informed source separation (ISS)
- Single-channel source separation (SCSS)
- Speaker-assisted source separation