Model compression optimized neural network controller for nonlinear systems

Li Jiang Li, Sheng Lin Zhou, Fei Chao, Xiang Chang, Longzhi Yang, Xiao Yu*, Changjing Shang, Qiang Shen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)
25 Downloads (Pure)

Abstract

Neural network-based controllers are widely used in robotic control systems. A network controller with more neurons typically achieves better performance, but an excessive number of neurons makes the model computationally intensive, resulting in slow dynamic responses in real-world environments. This paper reports a network compression method that integrates knowledge distillation for the development of concise neural network-based controllers, achieving a balance between control performance and computational cost. The method first trains a full-size teacher model, which is then pruned, leading to a concise network with minimal compromise in performance. In this study, the resulting concise network is taken as the prototype of a student model, which is further trained through a knowledge distillation process. The proposed compression method was applied to three classical networks, and the resultant compact controllers were tested on a robot manipulator to demonstrate their efficacy and potential. The experimental results of a comparative study confirm that the student models with fewer neurons, obtained through the proposed model compression approach, achieve performance similar to that of the teacher models for intelligent dynamic control but with faster convergence.
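
The pipeline described in the abstract (train a full-size teacher, prune it into a concise prototype, then train the resulting student by knowledge distillation) can be illustrated with a minimal PyTorch sketch. All architectures, layer sizes, the pruning ratio, and the loss weighting below are illustrative assumptions, not the paper's actual settings.

```python
# Minimal sketch of the teacher -> prune -> distil pipeline from the abstract.
# Every network size and hyper-parameter here is a hypothetical placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

# Hypothetical full-size "teacher" controller: maps system state to control output.
teacher = nn.Sequential(nn.Linear(6, 128), nn.Tanh(), nn.Linear(128, 2))

# ... assume the teacher has already been trained to convergence on the control task ...

# Step 1: prune the teacher to obtain a concise prototype for the student.
for module in teacher.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.7)  # zero 70% of weights (assumed ratio)
        prune.remove(module, "weight")  # make the pruning permanent

# Step 2: a smaller student network (assumed architecture) is trained by
# knowledge distillation against the pruned teacher's outputs.
student = nn.Sequential(nn.Linear(6, 32), nn.Tanh(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(state, target, alpha=0.5):
    """One KD update: blend the control-task loss with imitation of the teacher."""
    with torch.no_grad():
        teacher_out = teacher(state)
    student_out = student(state)
    task_loss = F.mse_loss(student_out, target)     # track the reference control signal
    kd_loss = F.mse_loss(student_out, teacher_out)  # match the teacher's outputs
    loss = alpha * task_loss + (1 - alpha) * kd_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example update on a random batch of states and reference control targets.
states = torch.randn(16, 6)
targets = torch.randn(16, 2)
print(distillation_step(states, targets))
```

Since the controller output is a continuous control signal rather than a class distribution, the sketch uses a mean-squared distillation loss on the raw outputs instead of the softened-logit KL term common in classification-oriented distillation.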

Original language: English
Article number: 110311
Number of pages: 9
Journal: Knowledge-Based Systems
Volume: 265
Early online date: 14 Jan 2023
DOIs
Publication status: Published - 8 Apr 2023

Keywords

  • Knowledge distillation
  • Model compression
  • Neural network pruning
  • Neural-network-based controller
