Learning a good representation with unsymmetrical auto-encoder

Yanan Sun, Hua Mao, Quan Guo, Zhang Yi

Research output: Contribution to journal › Article › peer-review

Abstract

Auto-encoders play a fundamental role in unsupervised feature learning and in learning initial parameters of deep architectures for supervised tasks. For given input samples, robust features should yield representations that are, from two perspectives, (1) invariant to small variations of the samples and (2) reconstructable by the decoder with minimal error. Traditional auto-encoders with various regularization terms use symmetrical numbers of encoder and decoder layers, and sometimes shared parameters. We investigate the relation between the numbers of encoder and decoder layers and propose an unsymmetrical structure, the unsymmetrical auto-encoder (UAE), to learn more effective features. We present empirical results of feature learning with the UAE and with state-of-the-art auto-encoders on classification tasks over a range of datasets. We also analyze the gradient vanishing problem mathematically and suggest an appropriate number of layers for UAEs with the logistic activation function. In our experiments, UAEs outperformed other auto-encoders under the same configuration.
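
The page does not include a reference implementation, so the following is a minimal sketch of the idea in PyTorch. The layer sizes (784-512-256-64), the module name UnsymmetricalAutoEncoder, and the mean-squared reconstruction objective are illustrative assumptions, not the authors' exact configuration; the only structural commitment is the asymmetry the abstract describes, i.e., several logistic (sigmoid) encoder layers against a single decoder layer.

    # Sketch of an unsymmetrical auto-encoder (UAE): more encoder layers
    # than decoder layers. All dimensions are assumptions for illustration.
    import torch
    import torch.nn as nn

    class UnsymmetricalAutoEncoder(nn.Module):
        def __init__(self, input_dim=784, hidden_dims=(512, 256), code_dim=64):
            super().__init__()
            # Multi-layer encoder with logistic (sigmoid) activations,
            # the activation the paper's gradient analysis considers.
            encoder_layers = []
            prev = input_dim
            for h in (*hidden_dims, code_dim):
                encoder_layers += [nn.Linear(prev, h), nn.Sigmoid()]
                prev = h
            self.encoder = nn.Sequential(*encoder_layers)
            # Single-layer decoder: the "unsymmetrical" part -- fewer
            # decoder layers than encoder layers.
            self.decoder = nn.Sequential(nn.Linear(code_dim, input_dim), nn.Sigmoid())

        def forward(self, x):
            code = self.encoder(x)
            return self.decoder(code), code

    if __name__ == "__main__":
        model = UnsymmetricalAutoEncoder()
        x = torch.rand(32, 784)                    # batch of flattened inputs
        recon, code = model(x)
        loss = nn.functional.mse_loss(recon, x)    # reconstruction error
        print(recon.shape, code.shape, loss.item())
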
Original language: English
Pages (from-to): 1361-1367
Number of pages: 7
Journal: Neural Computing and Applications
Volume: 27
Issue number: 5
Early online date: 24 Jul 2015
Publication status: Published - Jul 2016

Keywords

  • Auto-encoder
  • Neural networks
  • Feature learning
  • Deep learning
  • Unsupervised learning
