ChanEstNet: A Deep Learning Based Channel Estimation for High-Speed Scenarios

Yong Liao, Yuanxiao Hua, Xuewu Dai, Haimei Yao, Xinyi Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

69 Citations (Scopus)
391 Downloads (Pure)


To address the limited performance of downlink channel estimation caused by the fast time-varying and non-stationary channel characteristics of high-speed mobile scenarios, we propose a deep learning based channel estimation network called ChanEstNet. ChanEstNet uses a convolutional neural network (CNN) to extract channel response feature vectors and a recurrent neural network (RNN) to perform channel estimation. We train the network offline on a large amount of high-speed channel data, fully exploiting the channel information in the training samples so that the network learns the characteristics of fast time-varying, non-stationary channels and better tracks how the channel changes in high-speed environments. Simulation results show that, in high-speed mobile scenarios, the proposed channel estimation method achieves lower computational complexity and a significant performance improvement compared with traditional methods.
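The abstract describes a two-stage pipeline: a CNN that extracts channel response feature vectors from pilot observations, followed by an RNN that tracks the channel across OFDM symbols. The sketch below illustrates that data flow in NumPy; the layer sizes, kernel width, activation choices, and random weights are all illustrative assumptions, not the authors' actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    """'Same'-padded 1-D convolution with ReLU: (n_sub,) -> (n_sub, n_filters)."""
    k = kernels.shape[1]
    xp = np.pad(x, k // 2)
    out = np.stack([np.convolve(xp, w, mode="valid") for w in kernels], axis=-1)
    return np.maximum(out, 0.0)

def rnn_estimate(features, Wx, Wh, Wo):
    """Simple tanh RNN over OFDM symbols; one channel estimate per symbol."""
    h = np.zeros(Wh.shape[0])
    estimates = []
    for f in features:              # iterate over time (OFDM symbols)
        h = np.tanh(f @ Wx + h @ Wh)
        estimates.append(h @ Wo)    # linear readout to per-subcarrier estimate
    return np.stack(estimates)

# Toy dimensions (assumed): 14 OFDM symbols, 64 subcarriers,
# 8 CNN filters of width 5, RNN hidden size 32.
n_symbols, n_sub, n_filt, hidden = 14, 64, 8, 32
pilots = rng.standard_normal((n_symbols, n_sub))     # stand-in for raw LS pilot estimates
kernels = rng.standard_normal((n_filt, 5)) * 0.1     # CNN feature-extraction filters
Wx = rng.standard_normal((n_sub * n_filt, hidden)) * 0.05
Wh = rng.standard_normal((hidden, hidden)) * 0.05
Wo = rng.standard_normal((hidden, n_sub)) * 0.05

# Stage 1: CNN feature vectors per symbol; Stage 2: RNN refines them over time.
feats = np.stack([conv1d_relu(p, kernels).ravel() for p in pilots])
h_est = rnn_estimate(feats, Wx, Wh, Wo)
print(h_est.shape)  # (14, 64): one refined channel estimate per OFDM symbol
```

In the paper's setting the weights would be learned offline from simulated high-speed channel data; this untrained forward pass only shows how the CNN features feed the recurrent tracker symbol by symbol.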

Original language: English
Title of host publication: ICC 2019 - 2019 IEEE International Conference on Communications (ICC 2019) - Proceedings
Subtitle of host publication: Shanghai, China, 20-24 May 2019
Place of publication: Piscataway, NJ
Number of pages: 6
ISBN (Electronic): 9781538680889
ISBN (Print): 9781538680896
Publication status: Published - May 2019
Event: 2019 IEEE International Conference on Communications, ICC 2019 - Shanghai, China
Duration: 20 May 2019 - 24 May 2019

Publication series

Name: IEEE International Conference on Communications
ISSN (Print): 1550-3607

Conference: 2019 IEEE International Conference on Communications, ICC 2019
