Abstract
The auto-encoder model plays a crucial role in the success of deep learning. During the pre-training phase, auto-encoders learn a representation that helps improve the performance of the entire neural network during the fine-tuning phase. However, the learned representation is not always meaningful, and the network does not necessarily achieve higher performance with such a representation, because auto-encoders are trained in an unsupervised manner without knowledge of the specific task targeted in the fine-tuning phase. In this paper, we propose a novel approach to training auto-encoders by adding an explicit guiding term to the traditional reconstruction cost function that encourages the auto-encoder to learn meaningful features. In particular, the guiding term is the classification error with respect to the representation learned by the auto-encoder, and a representation is considered meaningful if a network using it as input achieves a low classification error on the classification task. Our experiments show that the additional explicit guiding term lets the auto-encoder take the prospective target into account in advance, driving learning toward a minimum that generalizes better on the particular supervised task for the dataset. Over a range of image classification benchmarks, we achieve results equal or superior to those of baseline auto-encoders with the same configuration.
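The following is a minimal sketch of how such a guided objective could be combined with the reconstruction cost, assuming a PyTorch-style setup; the architecture, the weighting factor `lam`, and all variable names are illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch only: the network shapes, the weighting factor `lam`,
# and all names are assumptions, not the authors' actual configuration.
import torch
import torch.nn as nn

class GuidedAutoEncoder(nn.Module):
    def __init__(self, in_dim=784, hid_dim=256, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())
        # Auxiliary classifier on the learned representation (the "guiding" head).
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), self.classifier(h)

def guided_loss(model, x, y, lam=0.1):
    """Traditional reconstruction cost plus an explicit classification guiding term."""
    x_hat, logits = model(x)
    recon = nn.functional.mse_loss(x_hat, x)        # reconstruction term
    guide = nn.functional.cross_entropy(logits, y)  # explicit guiding term
    return recon + lam * guide

# Usage sketch (pre-training loop):
# model = GuidedAutoEncoder()
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# for x, y in loader:
#     opt.zero_grad()
#     loss = guided_loss(model, x.view(x.size(0), -1), y)
#     loss.backward()
#     opt.step()
```

Under this reading, the encoder pre-trained with the combined objective would then initialize the network that is fine-tuned on the supervised task, so the learned representation is already shaped by the prospective target.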
| Original language | English |
| --- | --- |
| Pages (from-to) | 429-436 |
| Number of pages | 8 |
| Journal | Neural Computing and Applications |
| Volume | 28 |
| Issue number | 3 |
| Early online date | 20 Oct 2015 |
| DOIs | |
| Publication status | Published - Mar 2017 |
Keywords
- Auto-encoders
- Deep learning
- Representation learning
- Neural network