Inverse-GMM: A Latency Distribution Shaping Method for Industrial Cooperative Deep Learning Systems

Fei Qin, Yucong Xiao, Xian Sun, Xuewu Dai, Wuxiong Zhang, Fei Shen*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Front-deployed deep learning is a promising technology for next-generation industrial applications, as it can extract essential information from high-dimensional sensors. However, part of these heavy computation tasks at resource-constrained front devices has to be offloaded to edge or cloud devices, which forms a cooperative deep learning system through the exchange of intermediate data. The inference efficiency of the cooperative deep learning system is then highly correlated with the communication latency caused by the non-stationary, multipath-rich industrial fading channel. This paper proposes a novel method to control the distribution of communication latency, which is able to support an efficient cooperative deep learning architecture in harsh industrial environments. The proposed method is essentially an inverse process of the Gaussian Mixture Model (GMM), which adjusts latency samples to approach a given arbitrary shape function. To achieve this objective, a new variation of the Expectation-Maximization (EM) algorithm in the analytical domain is derived to decompose an arbitrary distribution shape into multiple Gaussian kernels, and an optimized stochastic resource allocation algorithm is proposed to approximate each Gaussian kernel. The performance of the proposed method is verified with both the classical Rician channel model and field-measured industrial fading channel responses.
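The decomposition step can be illustrated with a minimal sketch: the snippet below fits multiple Gaussian kernels to samples drawn from an arbitrary target latency shape using a standard sample-based EM fit (scikit-learn's GaussianMixture), rather than the analytical-domain EM variant derived in the paper. The target_shape function, its parameters, and the choice of K = 2 kernels are purely illustrative assumptions, not values from the paper.

import numpy as np
from sklearn.mixture import GaussianMixture

def target_shape(t):
    # Hypothetical bimodal latency shape on [0, 20] ms (unnormalized).
    return np.exp(-0.5 * ((t - 5.0) / 1.0) ** 2) + 0.5 * np.exp(-0.5 * ((t - 12.0) / 2.0) ** 2)

# Rejection-sample the target shape to obtain synthetic latency samples.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 20.0, size=200_000)
u = rng.uniform(0.0, 1.1, size=t.size)          # 1.1 upper-bounds the shape
samples = t[u < target_shape(t)]

# Decompose the shape into K Gaussian kernels with standard sample-based EM.
K = 2
gmm = GaussianMixture(n_components=K, random_state=0).fit(samples.reshape(-1, 1))
for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={mu:.2f} ms  std={np.sqrt(var):.2f} ms")

In the setting described in the abstract, each recovered kernel would then be targeted by the stochastic resource allocation step so that the aggregate latency distribution approaches the desired shape.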

Original language: English
Pages (from-to): 776-788
Number of pages: 13
Journal: IEEE Journal on Selected Areas in Communications
Volume: 41
Issue number: 3
Early online date: 16 Jan 2023
DOIs
Publication status: Published - 16 Feb 2023

Keywords

  • Electrical and Electronic Engineering
  • Computer Networks and Communications
