TY - JOUR
T1 - Explicit guiding auto-encoders for learning meaningful representation
AU - Sun, Yanan
AU - Mao, Hua
AU - Sang, Yongsheng
AU - Yi, Zhang
PY - 2017/3
AB - The auto-encoder model plays a crucial role in the success of deep learning. During the pre-training phase, auto-encoders learn a representation that helps improve the performance of the entire neural network during the fine-tuning phase of deep learning. However, the learned representation is not always meaningful, and the network does not necessarily achieve higher performance with such a representation, because auto-encoders are trained in an unsupervised manner without knowledge of the specific task targeted in the fine-tuning phase. In this paper, we propose a novel approach to training auto-encoders: an explicit guiding term is added to the traditional reconstruction cost function to encourage the auto-encoder to learn meaningful features. In particular, the guiding term is the classification error computed on the representation learned by the auto-encoder, and a representation is considered meaningful if a network taking it as input achieves a low error on the classification task. Our experiments show that the additional explicit guiding term gives the auto-encoder advance knowledge of the prospective target; during learning, it drives the optimization toward a minimum that generalizes better on the particular supervised task. Over a range of image classification benchmarks, we achieve results equal or superior to those of baseline auto-encoders with the same configuration.
KW - Auto-encoders
KW - Deep learning
KW - Representation learning
KW - Neural network
DO - 10.1007/s00521-015-2082-x
M3 - Article
VL - 28
SP - 429
EP - 436
JO - Neural Computing and Applications
JF - Neural Computing and Applications
SN - 0941-0643
IS - 3
ER -