TY - JOUR
T1 - Spectrogram based multi-task audio classification
AU - Zeng, Yuni
AU - Mao, Hua
AU - Peng, Dezhong
AU - Yi, Zhang
PY - 2019/2
Y1 - 2019/2
N2 - Audio classification is regarded as a great challenge in pattern recognition. Although audio classification tasks are usually treated as independent, they are essentially related to one another, for example speaker accent recognition and speaker identification. In this paper, we propose a Deep Neural Network (DNN)-based multi-task model that exploits such relationships and handles multiple audio classification tasks simultaneously. We term our model the gated Residual Networks (GResNets) model because it integrates Deep Residual Networks (ResNets) with a gate mechanism, which extracts better representations across tasks than Convolutional Neural Networks (CNNs). Specifically, two multiplied convolutional layers replace the two feed-forward convolution layers in the ResNets. We evaluated our model on multiple audio classification tasks and found that the multi-task model achieves higher accuracy than task-specific models trained separately.
AB - Audio classification is regarded as a great challenge in pattern recognition. Although audio classification tasks are usually treated as independent, they are essentially related to one another, for example speaker accent recognition and speaker identification. In this paper, we propose a Deep Neural Network (DNN)-based multi-task model that exploits such relationships and handles multiple audio classification tasks simultaneously. We term our model the gated Residual Networks (GResNets) model because it integrates Deep Residual Networks (ResNets) with a gate mechanism, which extracts better representations across tasks than Convolutional Neural Networks (CNNs). Specifically, two multiplied convolutional layers replace the two feed-forward convolution layers in the ResNets. We evaluated our model on multiple audio classification tasks and found that the multi-task model achieves higher accuracy than task-specific models trained separately.
KW - Multi-task learning
KW - Convolutional neural networks
KW - Deep residual networks
KW - Audio classification
U2 - 10.1007/s11042-017-5539-3
DO - 10.1007/s11042-017-5539-3
M3 - Article
SN - 1380-7501
VL - 78
SP - 3705
EP - 3722
JO - Multimedia Tools and Applications
JF - Multimedia Tools and Applications
IS - 3
ER -