Improved Demonstration-Knowledge Utilization in Reinforcement Learning

Yanyu Liu, Yifeng Zeng, Biyang Ma, Yinghui Pan, Huifan Gao, Yuting Zhang

Research output: Contribution to journal › Article › peer-review


Abstract

Reinforcement learning (RL) has achieved great success in recent years. Generally, the learning process requires a huge amount of interaction with the environment before an agent reaches acceptable performance. This motivates many techniques to accelerate learning, such as incorporating prior knowledge, usually presented as experts' demonstrations, and using a probability distribution to represent state-action values. These methods perform well when the prior knowledge is genuinely correct and little change occurs in the learning environment. However, this requirement is not realistic in many complex applications: the demonstration knowledge may not reflect the true environment and may even be full of noise. In this article, we introduce a dynamic distribution merging method to improve knowledge utilization in a general RL algorithm, namely Q-learning. The new method adopts a normal distribution to represent state-action values and merges the prior and learned knowledge in a discriminative way. We theoretically analyze the new learning method and demonstrate its empirical performance over multiple problem domains.
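The abstract does not spell out the merging rule, but the idea of representing each state-action value as a normal distribution and combining a (possibly noisy) demonstration prior with the learned estimate can be illustrated with a minimal sketch. The class name, hyperparameters, precision-weighted merge, and variance-shrinking heuristic below are all illustrative assumptions, not the authors' actual algorithm.

```python
# Minimal sketch: tabular Q-learning with Gaussian state-action values and a
# demonstration-derived prior merged via precision weighting (illustrative only).
import numpy as np


class GaussianQLearner:
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95,
                 prior_mean=None, prior_var=None):
        # Learned estimate: mean and variance per state-action pair.
        self.mu = np.zeros((n_states, n_actions))
        self.var = np.full((n_states, n_actions), 10.0)  # high initial uncertainty
        # Demonstration prior (may be noisy or outdated).
        self.prior_mu = prior_mean if prior_mean is not None else np.zeros((n_states, n_actions))
        self.prior_var = prior_var if prior_var is not None else np.full((n_states, n_actions), 5.0)
        self.alpha, self.gamma = alpha, gamma

    def merged_q(self, s):
        # Precision-weighted merge: a noisy prior (large variance) contributes
        # less to the merged value than a confident learned estimate.
        w_learned = 1.0 / self.var[s]
        w_prior = 1.0 / self.prior_var[s]
        return (w_learned * self.mu[s] + w_prior * self.prior_mu[s]) / (w_learned + w_prior)

    def update(self, s, a, r, s_next):
        # Standard Q-learning target, computed on the merged value estimates.
        target = r + self.gamma * np.max(self.merged_q(s_next))
        self.mu[s, a] += self.alpha * (target - self.mu[s, a])
        # Shrink the variance as evidence accumulates (simple heuristic).
        self.var[s, a] = max(0.01, self.var[s, a] * (1.0 - self.alpha))

    def act(self, s, epsilon=0.1):
        # Epsilon-greedy action selection on the merged values.
        if np.random.rand() < epsilon:
            return np.random.randint(self.mu.shape[1])
        return int(np.argmax(self.merged_q(s)))
```

In this sketch the prior's influence fades automatically wherever the learned variance becomes small, which is one simple way to use demonstration knowledge "in a discriminative way"; the paper's dynamic merging method should be consulted for the actual formulation.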

Original language: English
Article number: 3328848
Pages (from-to): 2139-2150
Number of pages: 12
Journal: IEEE Transactions on Artificial Intelligence
Volume: 5
Issue number: 5
Early online date: 3 Nov 2023
DOIs
Publication status: Published - 1 May 2024

Keywords

  • Learning from demonstration
  • Q-learning
  • reinforcement learning
