Abstract
Reinforcement learning (RL) has achieved great success in recent years. Generally, the learning process requires a huge amount of interaction with the environment before an agent reaches acceptable performance. This has motivated many techniques for accelerating learning, such as incorporating prior knowledge, usually presented as expert demonstrations, and using probability distributions to represent state-action values. These methods perform well when the prior knowledge is genuinely correct and the learning environment changes little. However, this requirement is rarely met in complex applications: the demonstrated knowledge may not reflect the true environment and may even be noisy. In this article, we introduce a dynamic distribution merging method to improve knowledge utilization in a general RL algorithm, namely Q-learning. The new method adopts a normal distribution to represent state-action values and merges the prior and learned knowledge in a discriminative way. We theoretically analyze the new learning method and demonstrate its empirical performance over multiple problem domains.
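The abstract does not spell out the update or merging rules, but a minimal sketch of the general idea it describes (Gaussian state-action values plus a weighted merge of prior and learned estimates) might look like the following. The `GaussianQ` class, the `merge` function, and the `trust` weighting are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Sketch only: Q-values are kept as normal distributions (mean, variance)
# per (state, action), and a demonstration-derived prior is merged with the
# learned estimate by precision weighting. Names and rules are hypothetical.

class GaussianQ:
    def __init__(self, n_states, n_actions, init_var=10.0):
        self.mean = np.zeros((n_states, n_actions))
        self.var = np.full((n_states, n_actions), init_var)

    def td_update(self, s, a, r, s_next, alpha=0.1, gamma=0.99):
        # Standard Q-learning target; the variance tracks recent TD errors,
        # so it shrinks as the estimate stabilizes.
        target = r + gamma * self.mean[s_next].max()
        td_err = target - self.mean[s, a]
        self.mean[s, a] += alpha * td_err
        self.var[s, a] = (1 - alpha) * self.var[s, a] + alpha * td_err ** 2


def merge(prior: GaussianQ, learned: GaussianQ, trust=1.0):
    # Precision-weighted product of two Gaussians; `trust` down-weights a
    # prior that disagrees with experience. This stands in for the paper's
    # discriminative merging, whose exact rule is not given in the abstract.
    p_prior = trust / prior.var
    p_learn = 1.0 / learned.var
    merged_var = 1.0 / (p_prior + p_learn)
    merged_mean = merged_var * (p_prior * prior.mean + p_learn * learned.mean)
    return merged_mean, merged_var
```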
| Original language | English |
|---|---|
| Article number | 3328848 |
| Pages (from-to) | 2139-2150 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Artificial Intelligence |
| Volume | 5 |
| Issue number | 5 |
| Early online date | 3 Nov 2023 |
| DOIs | |
| Publication status | Published - 1 May 2024 |
Keywords
- Learning from demonstration
- Q-learning
- reinforcement learning