Analyzing Convergence Aspects of Federated Learning: More Devices or More Network Layers?

Fazal Muhammad Ali Khan, Syed Ali Hassan, Rafay Iqbal Ansari, Haejoon Jung

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Federated learning has attracted considerable research interest as a means to better shape next-generation communication systems. Along this line, in this paper, different combinations of edge devices and convolutional layers of a neural network are tested for global model convergence. We investigate the number of communication rounds (CRs) required for a global model to converge as the number of convolutional layer channels and the number of edge devices taking part in global model convergence vary. We observe the effects of additive white Gaussian noise (AWGN) on the gradient vectors (GVs) that are shared with the parameter server (PS) through a wireless channel. Further, we add channel impairments and observe the CRs required for the model to converge. With higher values of noise power and channel impairments, even after exhausting the maximum number of CRs, the global model does not converge for smaller numbers of edge devices and convolutional layer channels. However, if the number of edge devices, the number of convolutional layer channels, or both are increased, the global model converges with substantially higher accuracy even under stronger noise and channel effects.
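A minimal sketch (not the authors' code) of the noisy gradient-sharing loop the abstract describes: edge devices compute local GVs, the wireless channel applies a fading coefficient and AWGN, and the PS averages the received GVs and updates the global model until convergence or until the CR budget is exhausted. The linear-regression task, Rayleigh fading, and all parameter values below are illustrative assumptions; the paper itself studies convolutional networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative settings (assumptions, not the paper's configuration).
NUM_DEVICES = 10     # edge devices taking part in each CR
DIM = 20             # size of the model / gradient vector
NOISE_POWER = 0.01   # AWGN variance on each shared GV
MAX_CRS = 500        # CR budget before declaring non-convergence
LR = 0.1             # PS learning rate
TOL = 0.3            # GV-norm threshold, chosen above the channel noise floor

# Synthetic IID least-squares data at each device.
w_true = rng.normal(size=DIM)
local_data = []
for _ in range(NUM_DEVICES):
    X = rng.normal(size=(50, DIM))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    local_data.append((X, y))

def local_gradient(w, X, y):
    """Least-squares gradient computed at one edge device."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

w = np.zeros(DIM)  # global model held at the PS
for cr in range(1, MAX_CRS + 1):
    received = []
    for X, y in local_data:
        g = local_gradient(w, X, y)
        h = rng.rayleigh(scale=1.0)  # toy channel impairment (fading)
        n = rng.normal(scale=np.sqrt(NOISE_POWER), size=DIM)  # AWGN on the GV
        received.append(h * g + n)   # GV as seen by the PS over the channel
    g_global = np.mean(received, axis=0)  # PS aggregation
    w -= LR * g_global
    if np.linalg.norm(g_global) < TOL:
        print(f"converged after {cr} CRs")
        break
else:
    print(f"did not converge within {MAX_CRS} CRs")
```

Raising NOISE_POWER or strengthening the fading in this sketch inflates the residual noise on the aggregated GV, so more CRs are needed; raising NUM_DEVICES averages the noise down, mirroring the trade-off the abstract reports.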

Original language: English
Title of host publication: 2022 IEEE 95th Vehicular Technology Conference
Subtitle of host publication: VTC2022-Spring
Place of publication: Piscataway, US
Publisher: IEEE
ISBN (Electronic): 9781665482431
DOIs
Publication status: Published - Jun 2022

Publication series

Name: IEEE Vehicular Technology Conference
Volume: 2022-June
ISSN (Print): 1550-2252

Keywords

  • Federated learning
  • channel impairments
  • convergence
  • gradient vectors
